one of the basic implications of quantum mechanics is that there exist incompatible experimental setups .for example , as it was originally initiated by werner heisenberg , the measurements of position and momentum of a quantum particle can not be performed simultaneously unless some imprecisions are introduced .the fact that position and momentum are not experimentally compatible physical quantities reflects the very properties of the quantum theory leading to the concept of coexistence .in general , the coexistence of quantum devices means that they can be implemented as parts of a single device . until now , the coexistence relation has been studied among quantum effects and observables ; see e.g. and references therein . however , in addition to observed measurement outcome statistics we can also end up with a quantum system .quantum operations and instruments are used to mathematically describe both : probabilities of the observed measurement outcomes and conditional states of the measured quantum system post - selected according to observed outcomes . compared to an effect , an operation describes a particular result of a quantum measurement on a different level , providing more details about what happened during the measurement .the topic of this paper the coexistence of quantum operations is thus a natural extension of the previous studies of the coexistence of effects .let us now fix the notation and set the problem in mathematical terms .let be a complex separable hilbert space .we denote by and the banach spaces of bounded operators and trace class operators on , respectively .the set of quantum states ( i.e. positive trace one operators ) is denoted by and the set of quantum effects ( i.e. positive operators bounded by the identity ) is denoted by . an _ operation _ is a completely positive linear mapping on such that } \leq 1\ ] ] for every .an operation represents a probabilistic state transformation .namely , if is applied on an input state , then the state transformation occurs with the probability } ] .a special class is formed by operations satisfying }=1 ] .thus , under the choi - jamiolkowski isomorphism operations on are associated with a specific subset of operators on . unlike kraus operators, the choi - jamiolkowski operator for a given operation is unique . in terms of choi - jamiolkowski operators. reads }=a^t\ , .\ ] ] for each operation , there is a minimal number of operators needed in its kraus decomposition .this number is called the _ ( kraus ) rank _ of , and we call any kraus decomposition with this minimal number of elements a_ minimal kraus decomposition_. ( notice , however , that even the choice of minimal kraus decomposition is not unique . )moreover , the rank of the associated choi - jamiolkowski operator equals to the kraus rank of the operation , i.e. . operations with kraus rank 1 are called _pure_. they are exactly the extremal elements in the convex set of all operations .the following two classes of operations , namely , conditional state preparators and lders operations , will be used later to exemplify the coexistence conditions .[ ex : preparator ] a _ conditional state preparator _ is an operation of the form } \xi\ ] ] for some fixed and . if , then this operation is just the constant mapping . for the conditional state preparator , the associated choi - jamiolkowski operator reads let be an effect . 
the _ lders operation _ associated to is defined by the formula the associated choi - jamiolkowski operator is given by with }={\left\langle\,\psi_a\,|\,\psi_a\,\right\rangle}=\frac{1}{d}{\textrm{tr}\left[a\right]}\leq 1\ , .\ ] ] if , then the corresponding lders operation is the identity operation and we get .let us also notice that if is proportional to a one - dimensional projection , i.e. , for some , then is the conditional state preparator .namely , we have }p = { \textrm{tr}\left[\varrho a\right ] } p \ , .\ ] ] in all other cases , when rank , lders operation is not a conditional state preparator since for a conditional state preparator we have , but .coexistence of two operations and is conditioned by the existence of a four - outcome instrument determined by four operations through such that and . ]we now make a simple but useful observation related to def .[ def : coexistent ] .suppose that and are coexistent operations and that is an instrument such that .this means that there are outcome sets such that .we define another instrument with outcomes by setting it follows from the properties of that is indeed an instrument .the operations and are in the range of as and .thus , we conclude the following . [ prop : basic ] two operations are coexistent if and only if they are in the range of an instrument defined on the outcome set .the fact stated in prop .[ prop : basic ] simplifies the study of the coexistence relation as we need to concentrate only on four outcome instruments .an illustration of a four outcome instrument and two coexistent operations is depicted in fig .[ fig : coexistence_of_ops ] .[ prop : coex - kraus ] two operations and are coexistent if and only if there exists a sequence of bounded operators and index subsets such that and if and are coexistent , we can choose an index set with at most elements .suppose first that there exists a sequence of bounded operators with the required properties . by defining get an instrument having both and in its range .hence , and are coexistent .suppose then that and are coexistent . as we have seen in prop .[ prop : basic ] , there exists a four outcome instrument such that and . choose a kraus decomposition for each .the union forms a collection with the required properties .the last claim follows by noticing that each operation has a kraus decomposition with ( at most ) kraus operators . on the other hand ,the role of the operation is only to guarantee the normalization of the instrument .we can hence re - define as the operation having a single kraus operator .let us note that the statement of prop .[ prop : coex - kraus ] remains valid if the eq .is replaced with an inequality and then the number of the elements in can be chosen to be at most .we can hence use either condition or , depending on which one happens to be more convenient . in the followingwe formulate the basic coexistence criterion of prop .[ prop : coex - kraus ] in terms of choi - jamiolkowski operators .[ prop : coex - cj ] two operations and are coexistent if and only if there exists a state with }={i} ] meaning that is a state in .the claim then follows from prop .[ prop : basic ] .[ ex : unitary ] let be a unitary operator and the corresponding unitary channel , i.e. , . as describes a deterministic and reversible state transformation , it is not expected to be coexistent with many other operations . 
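to make the choi-jamiolkowski correspondence used in the propositions above more tangible, here is a small numerical sketch written for this text (it is not code from the paper): it builds the choi operator of a lüders operation on a qubit from its kraus operators and checks the rank and positivity properties mentioned above. the dimension d = 2, the particular effect a, and the normalization convention for the maximally entangled state are illustrative assumptions and may differ from the paper's conventions.

```python
import numpy as np

d = 2  # qubit dimension (illustrative choice, not from the paper)

def choi(kraus_ops, d):
    """Choi-Jamiolkowski operator C = sum_k (K_k x I)|O><O|(K_k x I)^dag,
    with |O> = (1/sqrt(d)) sum_i |i>|i> (one common normalization convention)."""
    omega = np.zeros((d * d, 1), dtype=complex)
    for i in range(d):
        omega[i * d + i, 0] = 1.0 / np.sqrt(d)
    C = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        v = np.kron(K, np.eye(d)) @ omega
        C += v @ v.conj().T
    return C

def apply_operation(kraus_ops, rho):
    """E(rho) = sum_k K_k rho K_k^dag."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# luders operation for the effect A = diag(1, 1/2): single kraus operator sqrt(A)
A = np.diag([1.0, 0.5])
luders = [np.sqrt(A)]  # elementwise sqrt works here because A is diagonal

C = choi(luders, d)
print("rank of choi operator (= kraus rank):", np.linalg.matrix_rank(C))
print("choi operator positive semidefinite:", np.all(np.linalg.eigvalsh(C) > -1e-12))

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
print("success probability tr[E(rho)] = tr[rho A]:",
      np.trace(apply_operation(luders, rho)).real)
```

the single kraus operator gives a rank-one choi operator, matching the statement above that the kraus rank equals the rank of the associated choi-jamiolkowski operator.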
of course , we can reduce by accepting the state transformation with some probability and ignoring the rest , hence obtaining an operation .thus , and are coexistent operations .a proof that are indeed the only operations coexistent with the unitary channel can be seen from prop .[ prop : coex - cj ] .the choi - jamiolkowski operator corresponding to is , where in particular , is a one - dimensional projection and it can be written as a sum of two positive operators only if they are proportional to . on the other hand , there can not be other operators in the decomposition of as is already normalized , ={i} ] . aschoi - jamiolkowski operators are effects on we can formally consider their coexistence .our aim is to investigate the relation between the coexistence of operations and the coexistence of choi - jamiolkowski operators as effects .if and are coexistent , then the linearity of the choi - jamiolkowski isomorphism guarantees that effects and are coexistent , too . however , the converse is not true .namely , even if and are coexistent as effects , the associated operations need not be coexistent .for example , according to proposition [ prop : rank-1 ] two rank-1 operations and are coexistent only if they are trivially coexistent . however , if are one - dimensional projections associated with vectors , then and , which are always ( trivially ) coexistent as effects , because .the point is that does not correspond to any operation , because =d^2i\not\leq i$ ] .table [ tab : relations ] summarizes the mentioned results .we see that the remaining problem is the following : if and are coexistent effects but do not satisfy , what are the coexistent operations and ? the following examples demonstrate different aspects of this general problem ..[tab : relations ] relations between coexistence of effects and coexistence of their compatible operations .[ cols="<,^ , < " , ] let and be two conditional state preparators such that and are pure states , i.e. , for some unit vectors .a kraus decomposition of , with kraus operators , is of the form for some vector , or a sum of these kind of operators .similarly , a kraus decomposition of , with kraus operators , is either for some vector , or a sum of these kind of operators .suppose that and are coexistent , but the inequality does not hold .this implies that for some indices and . as a consequence, we must have .we conclude that if then the conditional pure state preparators are coexistent only if .it is customary to call an effect _ trivial _ if it is of the form for some .trivial effects are exactly those effects which are coexistent with all the other effects . in the same way, we can call an operation trivial if it is coexistent with all the other operations .clearly , the null operation is trivial in this sense since any instrument can be expanded by adding one additional outcome and attaching to this additional outcome . actually , the null operation is the only trivial operation .as shown in example [ ex : unitary ] a unitary channel is coexistent only with operations . since a trivial operation is coexistent with all unitary channels, it must be the null operation .in this paper we have studied the coexistence of two quantum operations . 
in particular, we have shown that two common types of operations in quantum information, namely conditional state preparators and lüders operations, are coexistent only under some very restrictive conditions. we have also shown that the coexistence problem for operations does not reduce to the coexistence problem for effects. recently, the coexistence of two arbitrary qubit effects has been characterized. it would be interesting to give an analogous characterization of two arbitrary qubit operations. this problem, however, seems to be much more intricate, as already the parametrization of qubit operations is quite a complex task. in quantum information theory, it has become typical to consider impossible devices forbidden by the rules of quantum mechanics. for an impossible device, one can then study its best approximative substitute. in particular, we can ask for the best coexistent approximations of two non-coexistent lüders operations. this problem will be studied elsewhere. t.h. acknowledges financial support from quantop and the academy of finland. d.r., p.s. and m.z. acknowledge financial support via the european union project hip fp7-ict-2007-c-221889, and via the projects apvv-0673-07 qiam, op ce qute itms nfp 262401022, and ce-sas qute. m.z. also acknowledges support of gačr via project ga201/07/0603.
quantum operations are used to describe the observed probability distributions and the conditional states of the measured system. in this paper, we address the problem of their joint measurability (coexistence). we derive two equivalent coexistence criteria. the two most common classes of operations, lüders operations and conditional state preparators, are analyzed. it is shown that lüders operations are coexistent only under very restrictive conditions, when the associated effects are either proportional to each other or disjoint.
spatial data arise when outcomes and predictors of interest are observed at particular points or regions inside a defined study area. spatial data sets are common in many fields including environmental science, economics, and epidemiology. in epidemiology, understanding the underlying spatial patterns of a disease is an important starting point for further investigations. the risk of disease inherently varies in space because the risk factors are non-uniformly distributed in space. such risk factors may include lifestyle variables such as alcohol and tobacco use, or exposure levels of environmental causes of disease such as air pollution or uv radiation. we expect that these risk factors are positively correlated in space, meaning that nearby areas will have similar exposure levels or underlying characteristics. that is, we assume risk factors obey tobler's first law of geography: `` everything is related to everything else, but near things are more related than distant things ''. in many studies, underlying disease risk factors are unknown or unmeasured. bayesian models account for unknown or unmeasured risk factors using priors chosen to mimic their correlation structure. the most common bayesian framework for area-level spatial data uses gaussian random effects with a covariance structure that imposes positive spatial dependence between random effects of neighboring or nearby areas. the non-gaussian spatial clustering and potts model based priors also impose positive dependence in the relative risks of neighboring areas. more recently, several authors have developed modifications to existing models, specifically to preserve positive dependence for spatial statistics applications. further, positive spatial dependence is usually imposed in geostatistical models for data observed point-wise rather than area-wise. for example, the matérn family of marginal covariance functions for gaussian random fields yields positive correlations between observations at two locations, with the magnitude of the correlation decreasing with distance. we present a bayesian model for area-level count data that uses gaussian random effects with a novel type of g-wishart prior on the inverse variance-covariance matrix. the usual g-wishart or hyper inverse wishart prior restricts off-diagonal elements of the precision matrix to 0 according to the edges in an undirected graph. previous authors use the g-wishart prior to analyze mortality counts for ten cancers in the united states using a bayesian hierarchical model incorporating gaussian random effects with a separable covariance structure. their comparisons show that allowing different strengths of association between pairs of neighboring states can have advantages over traditional conditional autoregressive priors that assume the same strength of conditional association across the study region. however, the g-wishart prior allows for both positive and negative conditional associations between neighboring areas.
the truncated g - wishart distribution that we introduce only has support over precision matrices that lead to positive conditional associations .we describe markov chain monte carlo ( mcmc ) algorithms for this new prior and construct a bayesian hierarchical model for areal count data that uses the truncated g - wishart prior for the precision matrix of gaussian random effects .we show via simulation studies that risk estimates based on a model using the truncated g - wishart prior are better than those based on conditional autoregression when the outcome is rare and the risk surface is not smooth . for univariate data ,there is little information to identify the parameters of the spatial precision matrix ; however , we can share information across outcomes in a multivariate model by assuming a separable covariance structure .we illustrate the improvement of using the truncated g - wishart prior in a separable model ( measured via cross - validation ) using cancer incidence data from the washington state cancer registry .the structure of this paper is as follows . in section 2 ,we present our modeling framework and give a brief overview of conditional autoregressive models . in section 3 , we define the truncated g - wishart distribution and give the details of an mcmc sampler for estimating relative risks in a spatial statistics context . in section 4 ,we present a simulation study based on univariate disease mapping using the geography of the counties of washington state . finally , in section 5 , we extend the univariate truncated g - wishart model to multivariate disease mapping using the separable gaussian graphical model framework of .let be a set of non - overlapping geographical areas , and let represent the set of counts of the observed number of health events in these areas .possible health events include deaths from a disease , incident cases of a disease , or hospital admissions with specific symptoms of a disease .next , let be the set of expected counts and be a matrix where is a vector of suspected risk factors measured in area . the expected counts account for differences in known demographic risk factors .if the population in each area is stratified into groups ( e.g. , gender and 5 year age - band combinations ) , then the expected count for each area is where is the population in area in demographic group and is the rate of disease in group .the rates may be estimated from the data if the disease counts are available by strata ( internal standardization ) or they may be previously published estimates for the rates of disease ( external standardization ) . a generic bayesian hierarchical model for data of this type is : where is the vector of counts with area excluded and is a probability distribution with spatial structure .most choices of encode the belief that the residual spatial random effects , , of nearby areas have similar values .this restriction follows from the interpretation of the random effects as surrogates for unmeasured risk factors , which are generally assumed to be positively correlated in space .the inclusion of produces smoother ( though biased ) estimates of the vector of relative risks , , with reduced variability compared to the maximum likelihood estimates .these maximum likelihood estimates , called standardized incidence ratios ( sirs ) or standardized mortality / morbidity ratios ( smrs ) , have large sampling variances when the expected counts are small . 
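as a concrete illustration of the standardization step and the sirs/smrs just mentioned, the following short python sketch computes expected counts from stratum-specific populations and reference rates, and the corresponding standardized incidence ratios. the populations, rates and counts below are made-up toy numbers written for this text, not data from the paper.

```python
import numpy as np

# toy data: 3 areas, 4 demographic strata (e.g. age-by-sex groups); not real data
pop = np.array([[200, 150, 120,  80],    # population of area i in stratum j
                [500, 400, 350, 300],
                [ 90,  80,  70,  60]])
rates = np.array([0.002, 0.005, 0.010, 0.020])   # reference rate of disease in stratum j
y = np.array([4, 25, 6])                          # observed counts per area

# internal standardization would instead estimate the stratum rates from the data itself,
# e.g. rates = counts_by_stratum.sum(axis=0) / pop.sum(axis=0)

E = pop @ rates          # expected count for each area: sum over strata of population * rate
sir = y / E              # standardized incidence ratios (the maximum likelihood estimates)
print("expected counts:", E)
print("SIRs:", sir)
```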
a key task in modeling areal count datais to choose a prior that is flexible enough to adapt to the smoothness of the risk surface .the most common choice for is the gaussian conditional autoregression or car prior , which is a type of gaussian markov random field . the car model for a vector of gaussian random variablesis defined by a set of conditional distributions .the conditional distribution for the random variable , , given the other variables , , is the joint distribution of the vector is a mean - zero multivariate normal distribution with precision , where , , and .this is a proper joint distribution if is a symmetric , positive definite matrix .the _ intrinsic conditional autoregression _ or icar prior is the most commonly used prior for spatial random effects within the class of car priors . under the icar prior ,the conditional mean for a given random effect is the weighted average of the neighboring random effects , and the conditional variance is inversely proportion to the sum of these weights : here is nonzero if regions and are neighbors ( i.e. , share a border ) and 0 otherwise ; is the sum of all of the weights for a specific area . a binary specification for frequently used , though other weights that incorporate the distance between areas can also be used . in the binary case , for neighboring regions and , the number of regions that border area . under this specification , the conditional mean for a particular random effect is the average value of the random effects for the neighboring regions , and the conditional variance is inversely proportional to the number of neighbors of the area . use a car prior for spatial random effects in a disease mapping context in what has become known as the _ convolution model _ : here is a non - spatial random effect and is a spatial random effect .the prior for is , and the prior for is the icar prior .though popular , the convolution model has several drawbacks .first , there are only two parameters ( and ) to control the level of smoothing with only one of these ( ) contributing to the spatial portion of the model .this parsimony is ideal for estimating a smooth risk surface in the presence of large sampling variably , which is a common issue for rare diseases or for small area estimation .however , using icar random effects can lead to over - smoothing , which masks interesting features of the risk surface , including sharp changes .several authors have addressed this issue by incorporating flexibility in the conditional independence structure of the relative risks .these approaches are fairly parsimonious , but estimating the parameters requires careful reversible jump mcmc or access to data from previous years . in contrast, we develop a locally - adaptive approach with a separate parameter for the strength of spatial association between each pair of neighboring areas while preserving the conditional independence structure .a second drawback is that the icar prior is improper .the joint distribution implied by the conditional specification in ( [ icar ] ) is a singular multivariate normal distribution with precision matrix , where is a diagonal matrix with elements . 
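the precision matrix just described can be assembled directly from a binary adjacency matrix; the following python sketch uses a toy four-area map (not the washington counties) and verifies numerically the rank deficiency discussed next, as well as the positive definiteness of the proper car variant introduced below.

```python
import numpy as np

# toy binary adjacency matrix W for 4 areas on a line 1-2-3-4 (illustrative only)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D_w = np.diag(W.sum(axis=1))        # diagonal matrix of neighbor counts
Q_icar = D_w - W                    # ICAR precision matrix (up to the scale parameter)

print("row sums:", Q_icar.sum(axis=1))                                   # all zero
print("rank:", np.linalg.matrix_rank(Q_icar), "of", Q_icar.shape[0])     # n - 1: improper prior

# the proper CAR precision D_w - rho*W is positive definite for 0 <= rho < 1
# whenever every area has at least one neighbor
rho = 0.9
Q_proper = D_w - rho * W
print("smallest eigenvalue of proper CAR precision:", np.linalg.eigvalsh(Q_proper).min())
```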
since each row of sums to , this precision matrix does not have full rank , and the joint prior for is improper .one way to alleviate both the over smoothing and the singularity issues is through the addition of a spatial autocorrelation parameter : this specification is called the proper car because it gives rise to a proper joint distribution as long as is between the reciprocals of the largest and smallest eigenvalues of .for the binary specification of , this always includes .the relationship between and the overall level of spatial smoothing in the proper car prior is complex .the prior marginal correlations between the random effects of neighboring areas increase very slowly as increases , with substantial correlation obtained only when is very close to .further , as increases , the ordering of these marginal correlations is not fixed .nonetheless , the icar prior remains a popular choice for spatially correlated errors in many applied settings .the conditional specification in ( [ icar ] ) is parsimonious , and one only needs to specify a single prior for the precision of the spatial random effects .prior specification has received some attention in the literature .further , off - the - shelf mcmc routines for the icar and convolution models are available in winbugs and various ` r ` packages .fast computation of approximate marginal posterior summaries is available using integrated nested laplace approximation ( inla ) .an alternative to specifying the prior for spatial random effects based on a set of conditional distributions is to work directly with the joint distribution . a gaussian graphical model or covariance selection model is a set of joint multivariate normal distributions that obey the pairwise conditional independence properties encoded by an undirected graph , .this graph has two elements : the vertex set and the edge list .the absence of an edge between two vertices corresponds to conditional independence and implies a specific structure for the precision matrix of the joint distribution .if follows a multivariate normal distribution with precision matrix , then follows a gaussian graphical model if for any pairs and . here is the vector excluding the and elements .the conjugate prior for the precision matrix in the gaussian setting is the wishart distribution , which is a distribution over all symmetric , positive definite matrices of a fixed dimension .the wishart distribution has two parameters .the first is a scaler , which controls the spread of the distribution .the second is an matrix , which is related to the location of the distribution . 
for , and mode .the g - wishart distribution is the conjugate prior for the precision matrix in a gaussian graphical model .the g - wishart distribution is a distribution over , the set of all symmetric , positive definite matrices with zeros in the off - diagonal elements that correspond to missing edges in .the density of the g - wishart distribution for a matrix is where is the trace of .the normalizing constant has a closed form when is a decomposable graph and can be estimated for general graphs using the monte carlo method proposed by .we propose a new g - wishart distribution called the truncated g - wishart distribution that imposes additional constraints on .this is a distribution over positive definite matrices where the off - diagonal elements that correspond to ( non - missing ) edges in are less than .this restriction means that all pairwise conditional ( or partial ) correlations are positive because this restriction is attractive in a spatial statistics context where we believe neighboring areal units are likely to be similar to each other , given the other areas . if follows a truncated g - wishart distribution , then here is the unknown normalizing constant , and is the set of matrices with negative off - diagonal elements .the normalizing constant in ( [ gwdistro ] ) is finite as long as and .the normalizing constant in ( [ ngwishdistro ] ) is finite under the same conditions because the support of the truncated g - wishart is a subset of the support of the g - wishart distribution .the mode of the truncated g - wishart is again , and for this reason we only consider . in this paper, we write for the truncated g - wishart distribution and for the g - wishart distribution . and transform to the cholesky square root , which we call , because it is easier to handle the positive definite constraint in the transformed space . in the g - wishart case ,the elements of are either variation independent or are deterministic functions of other elements .we call the off - diagonal elements of that correspond to missing edges in the graph `` non - free . '' these are deterministic functions of the `` free '' elements : the diagonal elements and the off - diagonal elements corresponding to edges in g. if we restrict to the space , we have the following constraints on the off - diagonal elements of the cholesky square root : the first two conditions guarantee that .the addition of the third inequality guarantees that ; however , this restriction comes at the cost of losing variation independence ( i.e. , the parameters space of is no longer rectangular ) .we sample from the truncated g - wishart distribution using a random walk metropolis hastings algorithm similar to the sampler proposed by .we sequentially perturb one free element at a time , holding the other free elements constant . in doing so, we must find the support of the conditional distribution of given the other elements .the support of this conditional distribution is the set of that satisfy inequalities ( [ diagfree])([myineq ] ) when the free elements , the left - hand sides of ( [ diagfree ] ) and ( [ myineq ] ) , are fixed . 
for each specific graph and fixed pair , we can write the inequalities in ( [ myineq ] ) as where is the set of fixed , free elements of excluding and .we construct by substituting the equalities from ( [ completion ] ) for all of the non - free elements that depend on .each is ( at worst ) a quadratic function of .when is a linear function , solving for gives a solution set of the form , where .when is quadratic , the solution set is , where is again negative . if in lexicographical order , then the upper bound for can not depend on .depending on the graphical structure , there are pairs such that the bound for does not depend on . in these cases .the conditional distribution of a free element given all other free elements is a continuous distribution over an open subinterval of given by we now give the analogous theorem for free , diagonal elements : the conditional distribution of a free element given other free elements is a continuous distribution over a subinterval of given by for proofs , see the supplementary material .we use these bounds to construct a markov chain with stationary distribution equal to the truncated g - wishart distribution .suppose is an upper - triangular matrix at iteration such that .for each free element in do the following : 1 . calculate the upper and lower limits for as described above .sample from a truncated normal with these limits , mean , and standard deviation .3 . update the non - free elements in lexicographical order .these steps give a proposal where the free elements in equal to the free elements of except in the entry .4 . accept according to the acceptance probability , where is the density of a normal distribution with mean and standard deviation truncated to the interval , and is the number of areas that are neighbors of area but have larger index numbers , that is , .the speed of this sampler depends on both the number of areas and on the number of edges in the adjacency graph .these determine the number of non - zero elements in and the number of non - zero elements in .the elements of can be reordered to form a banded matrix .the size of the bandwidth depends on the proportion of non - missing edges ( i.e. , the edge density ) , and the bandwidth of is the same as .thus reordering the elements of can create sparsity in , which reduces the number of nonzero terms in .figure [ fig : scale ] shows the time to one thousand iterations for graphs with different numbers of nodes and edges , averaging over 50 simulated networks for each size - density combination . for each simulation , we randomly sample networks with a given size and density and reorder the elements using a bandwidth - decreasing algorithm ( the reverse cuthill mckee algorithm , available in the ` spam ` package ) .the sampler scales well for very sparse networks , but the time to 1000 iterations grows quickly when the edge density is over % .the edge densities of the counties in washington state and the states in the continental us are and , respectively.=-1 , and the edge density of the continental us and the district of columbia is . ]we use truncated g - wishart prior within the generic bayesian hierarchical model for areal counts given in section 2 : we suggest choosing the hyper parameters for the priors on and by first specifying a reasonable range for the average relative risk and then finding values of and that match this range for a fixed value of .for fixed , the distribution of is a univariate normal distribution depending on and . 
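the cholesky parametrization and the completion of the non-free elements described above can be illustrated with a short python sketch. this is our own stripped-down illustration: the path graph, the starting values and the heuristic sign choice are arbitrary assumptions, and the metropolis-hastings step with the analytic truncated-normal bounds derived above is omitted.

```python
import numpy as np

p = 4
edges = {(0, 1), (1, 2), (2, 3)}          # undirected graph G on a path of four nodes

def complete(phi, edges):
    """fill in the 'non-free' upper-triangular elements (missing edges) so that
    K = phi^T phi has exact zeros in those positions; done in lexicographical order."""
    phi = phi.copy()
    for i in range(phi.shape[0]):
        for j in range(i + 1, phi.shape[0]):
            if (i, j) not in edges:
                phi[i, j] = -np.dot(phi[:i, i], phi[:i, j]) / phi[i, i]
    return phi

rng = np.random.default_rng(0)
phi = np.triu(rng.normal(size=(p, p)))
np.fill_diagonal(phi, 1.0 + rng.random(p))          # positive diagonal (free elements)
for (i, j) in edges:
    phi[i, j] = -abs(phi[i, j])                     # heuristic start inside the truncated cone

phi = complete(phi, edges)
K = phi.T @ phi

print("zeros on missing edges:", np.allclose([K[0, 2], K[0, 3], K[1, 3]], 0.0))
print("negative off-diagonals on edges:", all(K[i, j] < 0 for (i, j) in edges))
print("positive definite:", np.all(np.linalg.eigvalsh(K) > 0))
```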
using the adjacency matrix of washington state as an example and letting , of the prior on is between when and . for a more informative prior, setting gives a range of .more details of this prior specification framework are in the supplementary material .the prior on the spatial autocorrelation parameter was introduced by for computational convenience and to reflect the fact that large values of are needed to achieve non - negligible spatial dependence in the proper car prior . use a continuous uniform prior on and a prior in a similar multivariate context . for our purposes , using a discrete prior for is essential for carrying out mcmc because appears in the normalizing constant of the prior on .that is , the normalizing constant in ( [ ngwishdistro ] ) becomes .as will be shown below , we pre calculate ratios of these normalizing constants in advance .it is not practical to repeat this process at each step of the mcmc .we estimate the posterior distribution of the relative risks , , using mcmc .most of the transitions are standard metropolis or gibbs updates ( see supplementary material ) except for the updates on the precision matrix and the autocorrelation parameter .we update as described in section 3.3 , skipping over to preserve the restriction on .we update by choosing the next smallest or largest value in , each with probability .if and are not on the boundary of this list , then the acceptance probability is where \label { acceptrho } \\ & \,\,\,\,\,\,\,\,\,\,\,+ \log\left[i_2\left(g , \delta , ( \delta- 2 ) { \mathbf{d}}(\rho_t)\right)\right ] - \log\left[i_2\left(\delta , ( \delta- 2 ) { \mathbf{d}}(\rho')\right)\right ] .\nonumber\end{aligned}\ ] ] if either or is on the boundary , there is an extra factor of because the proposal is not symmetric : if , we propose with probability . because the graph is constant , the normalizing constants in ( [ acceptrho ] )only depend on .we estimate the necessary ratios of normalizing constants and store them in a table prior to running the full mcmc . for two densities of the form and with normalizing constants and , the ratio of normalizing constants is given by ] .* estimate - \log[i_1(g,3 , { \mathbf{d}}(\rho_2))] ] for each pair , we average over the estimates from parallel chains of iterations .figure 6 in the supplementary material shows the evolution of the estimates of - \log[i_1(g,3 , ( { \mathbf{d}}_w - 0.98 { \mathbf{w}})^{-1 } ) ] $ ] using the adjacency graph of the counties in washington state .in section 5 , we use the truncated g - wishart prior to analyze incidence data from the washington state cancer registry . in doing so, we adopt the same framework as and assign a matrix normal prior with a separable covariance structure to the log relative risks .this means we assume that the covariance in the log relative risks factors into a purely spatial portion and a purely between - outcomes portion .this assumption is common for modeling two - way data including multivariate spatial data and spatio - temporal data as well as multi - way data .here we assume that there are areas with counts for cancer sites ( site of primary origin of the cancer ) observed in each area .if is a matrix of observed counts and is a matrix of expected counts , then we have we use to denote the matrix normal distribution with separable covariance structure .that is , where `` '' is the kronecker product . 
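the separable (kronecker) covariance structure just defined can be illustrated in a few lines of python/numpy. the sizes, covariance matrices and the column-stacking vec convention below are our own illustrative choices, not the registry data or the priors used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n, q = 5, 3                                # illustrative sizes: n areas, q outcomes
M = np.zeros((n, q))                       # mean matrix (e.g. one overall rate per outcome)

# toy row (spatial) and column (between-outcome) covariances; any SPD matrices would do
Sigma_s = np.eye(n) + 0.5 * np.ones((n, n)) / n
Sigma_c = np.array([[1.0, 0.3, 0.1],
                    [0.3, 1.0, 0.2],
                    [0.1, 0.2, 1.0]])

A = np.linalg.cholesky(Sigma_s)            # Sigma_s = A A^T
B = np.linalg.cholesky(Sigma_c)            # Sigma_c = B B^T

Z = rng.standard_normal((n, q))
X = M + A @ Z @ B.T                        # one draw from the separable matrix normal

# separability: cov(vec(X)) = Sigma_c (kron) Sigma_s under the column-stacking
# convention, since vec(A Z B^T) = (B kron A) vec(Z)
full_cov = np.kron(Sigma_c, Sigma_s)
print(X.shape, full_cov.shape)             # (5, 3) and (15, 15)
```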
in the absence of any information on cancer risk factors such as smoking rate or a socioeconomic summary measure , we only include an overall rate for each cancer in the mean model , that is , .the row covariance describes the spatial covariance structure of the log relative risks .the column covariance matrix describes the covariance between the cancers .we incorporate the truncated g - wishart distribution as the prior for the spatial precision matrix , and we use a g - wishart or wishart prior with mode equal to the identity matrix for . when the prior on is a g - wishart prior , we incorporate uncertainty in the between - cancer conditional independence graph using a uniform prior over all graphs . for both priors , we restrict for identifiability .finally , we use an independent normal prior on each .we estimate the relative risks under this model using an mcmc sampler identical to that in , substituting in the sampler from section 3.2 for the update on .the assumption of separability yields a more parsimonious covariance structure and can yield more stable estimation than with a full , unstructured covariance matrix .conditioning on one precision matrix forms ` replicates ' for estimating the other : thus , if is known , then the sample size for estimating is equal to the number of rows , and similarly , if is known , the sample size for estimating ( and ) is equal to the number of columns .this factorization appears in the iterative algorithm for finding the maximum likelihood estimates of the matrix normal distribution as well as in the gibbs sampler when using the conjugate wishart prior with matrix or array normal data .we compare the univariate disease mapping model using the truncated g - wishart prior to three other models in a simulation study based on a similar study in .the purpose of this simulation study is to investigate the potential of the truncated g - wishart prior in a bayesian hierarchical model for a single realization of a disease outcome in each area .we also directly compare the g - wishart to the standard gaussian markov random field formulation in a univariate context , which has not previously been done in the literature .we find that the more flexible g - wishart priors can be advantageous when the underlying disease risk surface has sharp changes , but there are serious concerns related to estimating a large number of covariance parameters .we illustrate a more realistic example relying on the assumption of separability in section 5 .we use the counties in washington state as our study region and generate expected counts based on the age - gender structure of these counties in the census and published rates for larynx , ovarian , and lung cancer in the united kingdom in 2008 .these three cancers are chosen to represent a range of disease incidence from rare to common .a map of the counties with the underlying undirected graph is shown in figure [ adjacency ] , and the distributions of expected counts for each cancer are shown on the log scale in figure [ expcounts ] .we generate the risk surface as the combination of a globally - smooth surface and a locally - constant surface .we label each area or using a potts model so that neighboring areas are more likely to have the same label .the label allocation for this simulation study is shown in figure [ simlabs ] .for each simulation , we generate where is the label assigned to county .we simulate and independently from multivariate normal distributions with matrn covariance function with smoothness parameter 
and range chosen so that the median marginal correlation is .thus , each of the vectors and are realizations of a smooth spatial process observed at a finite set of points . in different simulations , we set to , , or .larger values of lead to a risk surface with more discontinuities .we generate realizations from each combination of and the three sets of expected counts . ) for simulation study . ] for the simulation results described below , we run each chain for iterations , discarding the first half as burn in .we set the prior parameters for the model in section 3.3 to , , , and .figure 3 in the supplementary material shows the evolution of the posterior mean for different chains for two elements of the cholesky square root and two random effects . in all cases, we reach convergence in about iterations .we compare the model using the truncated g - wishart prior to three other models .the model using the g - wishart prior is identical to the model from section 3.3 except that the prior on the precision matrix is the g - wishart prior instead of the truncated g - wishart prior .we also compare against the convolution model from section 2.2 and a similar model that includes only spatial random effects with an icar prior . in the convolution and icar models, we estimate the posterior mean and variance of the relative risks using inla . for the models using truncated g - wishart and g - wishart priors, we explore the posterior distributions using mcmc . in figure [fig : shrinktgwgw ] , we compare the true spatial random effects against the posterior estimates of the random effects for the truncated g - wishart model and the g - wishart models from one simulation for each set of expected counts .the estimates of the random effects are similar to the true values when the expected counts are high , but there is substantial shrinkage toward the prior mean of zero when the expected counts are small .this reflects the fact that there is much more information about the relative risks when the counts are larger , and we see the same relationship in other disease mapping models . under the truncated g - wishart ( tgw ) and g - wishart ( gw ) models .the posterior estimates shrink toward the prior mean of zero as the expected counts decrease . ]we compare the four methods using the root - averaged mean squared error ( ramse ) of the posterior mean of each relative risk .this is the square root of the mean squared error averaged over all simulations and all areas .for simulations and iterations of the mcmc sampler , the ramse is where it the true relative risk for area in simulation and is the corresponding value at iteration of the mcmc .the results of this simulation are shown in figure [ simres ] , and the triangle indicates the lowest ramse within each scenario .in general , the ramse decreases for all four models when the expected counts increase , and the ramse increases when the level of smoothing decreases ( i.e. , m increases ) . the model using the truncated g - wishart prior performs the best in six out of nine scenarios ,and we see the greatest benefit in the larynx , simulation when the expected counts are low and the local discontinuities in the risk surface are most prominent . 
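for reference, the comparison metric used above can be written in a few lines; this is our own sketch, and the averaging convention (posterior mean per simulation and area, then averaging squared errors over simulations and areas) is one reading of the definition given earlier. the array shapes and numbers are fabricated purely to show the inputs involved.

```python
import numpy as np

def ramse(theta_true, theta_draws):
    """root-averaged mean squared error of the posterior-mean relative risks.
    theta_true:  array (n_sims, n_areas) of true relative risks
    theta_draws: array (n_sims, n_iters, n_areas) of posterior draws"""
    post_mean = theta_draws.mean(axis=1)               # posterior mean per simulation/area
    return np.sqrt(np.mean((post_mean - theta_true) ** 2))

# toy example with fabricated numbers, only to illustrate the shapes
rng = np.random.default_rng(2)
truth = np.exp(rng.normal(0.0, 0.3, size=(10, 39)))     # 10 simulations, 39 areas
draws = truth[:, None, :] * np.exp(rng.normal(0.0, 0.1, size=(10, 500, 39)))
print(ramse(truth, draws))
```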
.the triangle signifies the smallest value for each experiment .the four models are : tgw , truncated g - wishart prior on the precision matrix for the spatial random effects ; gw , g - wishart prior on the precision matrix for the spatial random effects ; bym , convolution model with independent and icar random effects ; icar , only icar random effects .all models show increased ramse with increased spatial discontinuities ( large m ) and increased ramse with smaller expected counts .the tgw prior performs the best in six out of nine scenarios with the greatest benefit in the larynx , experiment . ]while the truncated g - wishart and g - wishart priors for the spatial covariance appear advantageous in this simulation study , there is little information in a single sample for estimating the full covariance matrix .figure [ fig : prvpostuniv ] shows that the posterior distributions of the elements of the chokesly square root are nearly identical to the prior distributions .this suggests that prior parameter choice plays a substantial role in the results from the tgw and gw models .furthermore , the tgw and gw models should struggle when the risk surface is smoothly varying and the degree of smoothness is common across the study region .table 2 in the supplementary material shows that the convolution model outperforms the tgw and gw models when there is no spatial association ( the log relative risks are generated independently ) and when the underlying risk surface is smooth ( the log relative risks are generated directly from the icar prior ) .in general , the tgw and gw results are comparable with the convolution model when the expected counts were larger and there was some spatial structure in the risk surface .however , with large expected counts ( e.g. , the lung cancer scenario ) , most reasonable methods will perform adequately .in this section , we use the truncated g - wishart prior in a multivariate disease mapping context using cancer incidence data from the washington state cancer registry . let be a matrix of incidence for cancers in each county in washington state in 2010 .these cancers have the largest incidence across the state in 2010 .the expected counts are calculated separately for each cancer using internal standardization based on sex and 5-year age bands .the standardized incidence ratios ( sirs = ) for these data are between and , and the range of the empirical correlations between the sirs of the different cancers ( not taking into account spatial dependence ) is . just over of the countsare under , but we do not treat small counts as missing in this analysis .we use cross - validation to compare the model in section 3.4 to models using the g - wishart prior and using the proper car form for .we compare different choices for the prior on and two choices for the prior on . 
for the truncated g - wishart and g - wishart priors on , we set and , where the prior on is the same as in section 3.3 .the mcar prior on is simply .for both the wishart and the g - wishart priors on , we set and .we randomly split all observations into bins and create data sets , each with one bin of counts held out .we impute the missing counts as part of the mcmc and compare the models based on average predictive squared bias ( ) and average predictive variance ( var ) .let be the predicted value under model , be the variance of the posterior predictive distribution , and be the observed count .the comparison criteria are the results ( based on running each mcmc for 200,000 iterations ) are given in table [ comparison ] .the truncated g - wishart model with a g - wishart prior on performs best in terms of bias , and the truncated g - wishart model with a wishart prior on performs best in terms of predictive variance . using the truncated g - wishart prior for the spatial precision matrix improves over the g - wishart prior for both choices of prior for .the mcar model is the second best model in terms of mse ( the sum of and var ) ..ten - fold cross - validation results for the washington state cancer incidence data .the five models use the matrix normal random effects model from section 3.4 .the priors on the precision matrices are : ggm , g - wishart priors on and ; tggm , truncated g - wishart prior on and g - wishart prior on ; full , g - wishart prior on and wishart prior on ; tfull , truncated g - wishart prior on and wishart prior on ; mcar , proper car prior on and wishart prior on . in the ggm and tggm models ,the cancer conditional independence graph is random . in the other three models, is a complete graph . [cols=">,>,>,>,^,^",options="header " , ] the cross - validation results are somewhat sensitive to the choice of prior on . we investigated fixing to or ( the mean of the prior used in ) as well as using a discrete uniform prior on . in some cases , the predictive variance is substantially smaller than the variance in table [ comparison ] , but this comes at the cost of greater bias .the best method in terms of overall mse is still the tggm model where the prior on is discrete uniform with additional values closer to .full cross - validation results for the three additional priors on are in the supplementary material .this article presents a novel extension of the g - wishart prior for the precision matrix of spatial random effects . in a simulation study ,the truncated g - wishart prior is able to better estimate the relative risks when the outcomes are rare ( i.e. , the expected counts are small ) and when the risk surface is not smooth .however , we found that there is not enough information in a single outcome to estimate the spatial correlation structure .the restriction of the g - wishart prior was shown to be advantageous when used in a multivariate disease mapping context with incidence data from the washington state cancer registry .the multivariate model relies on the assumption of separability to estimate the rich correlation structure by pooling information across outcomes .the validity of the separability assumption has been carefully considered for spatiotemporal applications , and alternative , non - separable space time covariance models have been proposed for gaussian processes and gaussian markov random fields . 
extend the mcar to allow for different spatial autocorrelation parameters for each outcome , yielding non - separable model that is still relatively parsimonious , and further extend the mcar paradigm by including parameters that a directly represent the correlation between different outcomes in neighboring areas .ultimately , these mcar extensions still make an assumption similar to separability in that the correlation between outcomes within a single areas is the same for all areas.=1 as mentioned in section 2.1 , others have approached this problem by directly altering the conditional independence structure .given that these models have been shown to outperform the traditional convolution model in some scenarios and are fairly parsimonious , these methods may be better for univariate outcomes than our tgw model .one direction for future research is to incorporate the locally adaptive car in the matrix variate random effect framework of sections 3.4 and 5 .there are a number of computation issues when using the truncated g - wishart and g - wishart priors .each mcmc run for the univariate truncated g - wishart model in section 4 takes approximately hours to complete on a ghz intel xeon e5 - 2640 processor , and , with the exception of the mcar model , the mcmc for each model in section 5 takes about hours to complete .in contrast , estimating the convolution and icar models from section 4 takes a matter of seconds in inla . we have found that the proposal variance for updates of the cholesky square ( section 3.3 ) and the random effects ( see supplementary material ) must be chosen carefully to avoid poor convergence . in both sections and , we used for updating and for updating . while the computation time for the models detailed here are not prohibitive , they may pose a challenge as we extend to more complicated datasets , such as those including multiple diseases in time and space .r code for the simulation in section 4 and c++ code for the analysis in section 5 are available at http://www.lancaster.ac.uk/staff/smithtr/ngwsource.zip .included here are the expected counts and labeling scheme for section 4 and prototypical data for section 5 .a censored version of the data used in section 5 is available from https://fortress.wa.gov/doh/wscr/wscr/query.mvc/query .dobra , a. , lenkoski , a. , and rodriguez , a. ( 2011 ) .`` bayesian inference for general gaussian graphical models with applications to multivariate lattice data . '' _ journal of the american statistical association _, 106 : 14181433 .gneiting , t. and guttorp , p. ( 2010 ) .`` continuous parameter spatio - temporal processes . '' in : gelfand , a. , diggle , p. , guttorp , p. , and fuentes , m. ( eds . ) , _ handbook of spatial statistics _ , 427436 .crc press .hughes , j. and haran , m. ( 2013 ) .`` dimension reduction and alleviation of confounding for spatial generalized linear mixed models . ''_ journal of the royal statistical society : series b ( statistical methodology ) _ , 75 : 139159 .jin , x. , banerjee , s. , and carlin , b. p. ( 2007 ) .`` order - free co - regionalized areal data models with application to multiple - disease mapping . '' _ journal of the royal statistical society : series b ( statistical methodology ) _ , 69 : 817838 .knorr - held , l. and best , n. ( 2001 ) . `` a shared component model for detecting joint and selective clustering of two diseases_ journal of the royal statistical society : series a ( statistics in society ) _ , 164 : 7385 .lee , d. and mitchell , r. 
( 2013 ) .`` locally adaptive spatial smoothing using conditional auto - regressive models . ''_ journal of the royal statistical society : series c ( applied statistics ) _ , 62 : 593608 .mardia , k. and goodall , c. ( 1993 ) .`` spatial - temporal analysis of multivariate environmental monitoring data . '' in : patil , g. and rao , c. ( eds . ) , _ multivariate environmental statistics _ , 347385 .elsevier .roverato , a. ( 2002 ) .`` hyper inverse wishart distribution for non - decomposable graphs and its application to bayesian inference for gaussian graphical models . '' _ scandinavian journal of statistics _, 29 : 391411 .rue , h. , martino , s. , and chopin , n. ( 2009 ) .`` approximate bayesian inference for latent gaussian models by using integrated nested laplace approximations . ''_ journal of the royal statistical society : series b ( statistical methodology ) _ , 71 : 319392 .spiegelhalter , d. , best , n. , carlin , b. , and van der linde , a. ( 2002 ) .`` bayesian measures of model complexity and fit . ''_ journal of the royal statistical society : series b ( statistical methodology ) _ , 64 : 583639 .ts and ad were supported in part by the national science foundation ( dms 1120255 ) .jw was supported by 2r01ca095994 - 05a1 from the national institutes of health .the authors thank the washington state cancer registry for providing the cancer incidence data and the referees for their helpful comments .
we present a bayesian model for area-level count data that uses gaussian random effects with a novel type of g-wishart prior on the inverse variance-covariance matrix. specifically, we introduce a new distribution called the truncated g-wishart distribution that has support over precision matrices that lead to positive associations between the random effects of neighboring regions while preserving conditional independence of non-neighboring regions. we describe markov chain monte carlo sampling algorithms for the truncated g-wishart prior in a disease mapping context and compare our results to bayesian hierarchical models based on intrinsic autoregression priors. a simulation study illustrates that using the truncated g-wishart prior improves over the intrinsic autoregressive priors when there are discontinuities in the disease risk surface. the new model is applied to an analysis of cancer incidence data in washington state.
in a typical lattice qcd project the total run-time of code on a supercomputing platform is often measured in months or even years. this means that even a modest improvement in the performance of the code can yield very tangible benefits. there are two aspects to the optimisation of code for parallel machines: single-node optimisation and the minimisation of the overhead incurred by inter-node communications. the former requires that the code be written to take full advantage of the high performance available from today's advanced hardware; the latter is of particular importance on cluster machines, like _alice_, where the scalability of code can be a serious problem. experience tells us that the dominant part of a typical lattice qcd code is that implementing the multiplication of a vector by the fermion matrix, so it is here that the effort should be made. secondly, the use of hand-coded optimised assembler routines can dramatically improve performance, since the programmer can use information about the code which is unavailable to the compiler. the disadvantage with assembler routines is that they are difficult to develop and harder to maintain, in addition to the obvious lack of portability. we address these problems by adopting a metacoding approach: writing a c++ program to write the assembler code for us. we have developed special software tools to enable this. the first stage in creating the assembler routine is to reduce the computational task to elementary assembler-level abstract instructions, _e.g._ load a datum from memory into registers, perform arithmetic on the data, cache management, _etc._ in order to write the metacode we have developed a system of c++ classes and routines which automatically schedule the instructions to hide the instruction latencies as much as possible and automatically manage the register usage. when the metacode written using these routines is compiled and run, the abstract instructions with their arguments are translated into an actual assembly language and written to a file. by basing the toolkit design on an abstract risc isa it should be possible to produce assembler code for any risc machine by changing the architecture-dependent parameters. here we show the results on _alice_, a cluster of compaq ds10 servers which have a 616 mhz, 4-way superscalar alpha 21264 processor with a 64 kb 2-way set-associative level 1 (on-chip) data cache and a 2 mb level 2 (off-chip) cache. an advantage of the metacode toolkit is that it permits a large degree of flexibility in writing various assembler kernels; different approaches can be tried and compared, and the kernels can be rewritten to adapt to changes in the action or algorithm. figure [fmm] shows the improvement, over a wide range of lattice volumes, in the performance of the wilson matrix multiplication routine when written with assembler kernels over that of the original implementation in c. to demonstrate the effect in a more realistic environment, the inversion of the wilson matrix using bicgstab is shown in figure [solver]. [figure solver caption: wilson matrix bicgstab, comparing single (dashed) and double (solid) precision.]
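returning to the metacoding idea outlined above, the following toy sketch (written in python for this text, not the actual c++ toolkit) shows the flavour of emitting abstract risc-like instructions and rendering them to an assembler-like text output. all opcode names, register naming and the kernel itself are invented for illustration; the instruction scheduling, latency hiding and register management performed by the real toolkit are omitted.

```python
class MetaCode:
    def __init__(self):
        self.instructions = []      # list of (opcode, operands) tuples in program order
        self._next_reg = 0

    def reg(self):
        """allocate a fresh abstract register name"""
        name = f"f{self._next_reg}"
        self._next_reg += 1
        return name

    def clear(self, dst):
        self.instructions.append(("fclr", (dst,)))

    def load(self, dst, addr):
        self.instructions.append(("ldt", (dst, addr)))

    def fma(self, dst, a, b, c):    # dst = a*b + c
        self.instructions.append(("fma", (dst, a, b, c)))

    def store(self, src, addr):
        self.instructions.append(("stt", (src, addr)))

    def render(self):
        """translate the abstract instructions into pseudo-assembly text"""
        return "\n".join(f"\t{op}\t{', '.join(ops)}" for op, ops in self.instructions)

# metacode for a tiny dot-product-like kernel: acc += x[i] * y[i] for i = 0, 1, 2
mc = MetaCode()
acc = mc.reg()
mc.clear(acc)
for i in range(3):
    x, y = mc.reg(), mc.reg()
    mc.load(x, f"{8 * i}(x_ptr)")
    mc.load(y, f"{8 * i}(y_ptr)")
    mc.fma(acc, x, y, acc)
mc.store(acc, "0(out_ptr)")
print(mc.render())
```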
_alice_ is clustered using parastation 3 over 64bit/33mhz myrinet. our code uses mpich 1.2.3 to do the communications. we test the multinode performance of the bicgstab solver on a lattice distributed over 1, 2, 4, 8 and 16 nodes arranged in a 1-dimensional grid and a 2-dimensional (square) grid. we use the standard metric of parallel performance, the speedup. our original implementation used a conventional array ordering for all the fields, where each lattice site is numbered lexicographically according to its coordinates, with the stride in each direction given by the size of the local lattice in that direction. this is illustrated in figure [layout] (left), which shows that while the data along the boundary in one direction is contiguous, in the second direction it is strided. investigations into the performance of our mpi communications suggest that the communication of strided data introduces an overhead of at least 20% compared to contiguous data. this explains the poor scaling of the solver on a 2-dimensional grid shown in figure [oldspeedup]. the scaling on the 1-dimensional grid suffers from the increasingly unfavourable surface-to-volume ratio of the local lattice. [figure oldspeedup caption: speedup of the bicgstab solver with the original data layout.] the solution to these problems appears to be to rearrange the data layout so that the sites on the lattice boundaries are ordered in a contiguous fashion, illustrated in figure [layout] (right). [figure layout caption: illustration of the old (left) and new (right) data layout; the shaded areas show data on the boundary which is communicated.] separating the boundary and interior sites in this way has the additional advantage that computation can proceed on the interior sites while the boundary sites are waiting for a non-blocking communication to finish. figure [newspeedup] shows that using this new data layout greatly improves the speedup of the solver. the new data layout does not adversely affect single-node performance. [figure newspeedup caption: speedup of the bicgstab solver with the new data layout.] we have introduced a flexible software toolkit [1] which can successfully generate optimised assembler routines for performance-critical parts of our lattice qcd code. on a single node we see a 100-150% improvement in the wilson matrix solver performance at single precision and 50-100% at double precision. we demonstrate that good scaling performance can be achieved on _alice_ if the data layout and communication strategy are carefully adapted to suit the communication needs. thanks to dr. p. boyle for inspiration and information and to the _alice_ team. i acknowledge the financial support provided through the european community's human potential programme under contract hprn-ct-2000-00145, hadrons/lattice qcd.
1. ` www.theorie.physik.uni-wuppertal.de/computerlabor/alice/akmt.phtml `
we present results for the performance of qcd code on _ alice _ , the alpha - linux cluster engine at wuppertal . we describe the techniques employed to optimise the code , including the metaprogramming of assembler kernels , the effects of data layout and an investigation into the overheads incurred by the communication .
since the seminal works on the small - world phenomenon by watts and strogatz and the scale - free property by barabsi and albert , the studies of complex networks have attracted a lot of interests within the physics community .one of the ultimate goals of the current studies on complex networks is to understand and explain the workings of the systems built upon them .the previous works about epidemic spreading in scale - free networks present us with completely new epidemic propagation scenarios that a highly heterogeneous structure will lead to the absence of any epidemic threshold ( see the review papers and the references therein ) .these works mainly concentrate on the susceptible - infected - susceptible ( sis ) and susceptible - infected - removed ( sir ) models .however , many real epidemic processes can not be properly described by the above two models .for example , in many technological communication networks , each node not only acts as a communication source and sink , but also forwards information to others . in the process of broadcasting , each node can be in two discrete states , either _ received _ or _ unreceived_. a node in the received state has received information and can forward it to others like the infected individual in the epidemic process , while a node in the unreceived state is similar to the susceptible one .since the node in the received state generally will not lose information , the so - called susceptible - infected ( si ) model is more suitable for describing the above dynamical process .another typical situation where the si model is more appropriate than sis and sir models is the investigation of the dynamical behaviors in the very early stage of epidemic outbreaks when the effects of recovery and death can be ignored .the behaviors of the si model are not only of theoretical interest , but also of practical significance beyond the physics community .however , this has not been carefully investigated thus far .very recently , barthlemy _ et al . _ studied the si model in barabsi - albert ( ba ) scale - free networks , and found that the density of infected nodes , denoted by , grows approximately in the exponential form , , where the time scale is proportional to the ratio between the second and the first moments of the degree distribution , .since the degree distribution of the ba model obeys the power - law form with , this epidemic process has an infinite spreading velocity in the limit of infinite population . following a similar process on _ random apollonian networks _ and the barrat - barthlemy - vespignani networks , zhou _ et al ._ investigated the effects of clustering and weight distribution on si epidemics . 
and by using the theory of branching processes , vazquez obtained a more accurate solution of , including the behaviors with large .the common assumption in all the aforementioned works is that each node s potential infection - activity ( infectivity ) , measured by its possibly maximal contribution to the propagation process within one time step , is strictly equal to its degree .actually , only the contacts between susceptible and infected nodes have possible contributions in epidemic processes .however , since in a real epidemic process , an infected node usually does not know whether its neighbors are infected , the standard network si model assumes that each infected node will contact every neighbor once within one time step , thus the infectivity is equal to the node degree .the node with very large degree is called a _hub _ in network science , while the node with great infectivity in an epidemic contact network is named _ superspreader _ in the epidemiological literature .all the previous studies on si network model have a basic assumption , that is , .this assumption is valid in some cases where the hub node is much more powerful than the others .however , there are still many real spreading processes , which can not be properly described by this assumption .some typical examples are as follows . in the broadcasting process , the forwarding capacity of each node is limited .especially , in wireless multihop ad hoc networks , each node usually has the same power thus almost the same forwarding capacity . in epidemic contact networks , the hub node has many acquaintances ; however , he / she could not contact all his / her acquaintances within one time step .analogously , although a few individuals have hundreds of sexual partners , their sexual activities are not far beyond a normal level due to the physiological limitations . in some email service systems , such as the gmail system schemed out by google , one can be a client only if he / she received at least one invitation from some existing clients . and after he/ she becomes a client , he / she will have the ability to invite others . however , the maximal number of invitations he / she can send per a certain period of time is limited . in network marketing processes , the referral of a product to potential consumers costs money and time ( e.g. a salesman has to make phone calls to persuade his social surrounding to buy the product ) .thus , generally speaking , the salesman will not make referrals to all his acquaintances .in addition , since the infectivity of each node is assigned to be equal to its degree , one can not be sure which ( the power - law degree distribution , the power - law infectivity distribution , or both ) is the main reason that leads to the virtually infinite propagation velocity of the infection . [ 0.8 ] vs time , where . the black and red curves result from the standard si network model and the present model .the numerical simulations are implemented based on the ba network of size and with average degree .the spreading rate is given as , and the data are averaged over 5000 independent runs.,title="fig : " ] [ 0.8 ] vs time in normal ( a ) and single - log ( b ) plots . the black solid , red dot , green dash and blue dash - dot curves correspond to and 0.0001 , respectively . 
in single - log plot ( b ) , the early behavior of can be well fitted by a straight line , indicating the exponential growth of infected population .the inset shows the rescaled curves .the four curves for different collapse to one curve in the new scale .the numerical simulations are implemented based on a ba network of size and with average degree , and the data are averaged over 5000 independent runs.,title="fig : " ] [ 0.8 ] vs time in normal ( a ) and single - log ( b ) plots .the black solid , red dot , green dash and blue dash - dot curves correspond to and 0.0001 , respectively . in single - log plot ( b ) , the early behavior of can be well fitted by a straight line , indicating the exponential growth of infected population .the inset shows the rescaled curves .the four curves for different collapse to one curve in the new scale .the numerical simulations are implemented based on a ba network of size and with average degree , and the data are averaged over 5000 independent runs.,title="fig : " ]different from the previous works , here we investigate the si process on scale - free networks with identical infectivity . in our model, individuals can be in two discrete states , either susceptible or infected .the total population ( i.e. the network size ) is assumed to be constant ; thus , if and are the numbers of susceptible and infected individuals at time , respectively , then denote by the _ spreading rate _ at which each susceptible individual acquires infection from an infected neighbor during one time step .accordingly , one can easily obtain the probability that a susceptible individual will be infected at time step to be where denotes the number of contacts between and the infected individuals at time . for small , one has in the standard si network model , each infected individual will contact all its neighbors once at each time step , thus the infectivity of each node is defined by its degree and is equal to the number of its infected neighbors at time . in the present model , we assume every individual has the same infectivity ,in which , at every time step , each infected individual will generate contacts where is a constant .multiple contacts to one neighbor are allowed , and contacts between two infected ones , although having no effect on the epidemic dynamics , are also counted just like the standard si model .the dynamical process starts by selecting one node randomly , assuming it is infected .in the standard si network model , the average infectivity equals the average degree .therefore , in order to compare the proposed model with the standard one , we set . as shown in fig .1 , the dynamical behaviors of the present model and the standard one are clearly different : the velocity of the present model is much less than that of the standard model . in the following discussions , we focus on the proposed model . without loss of generality , we set .denote by the density of infected -degree nodes .based on the mean - field approximation , one has \sum_{k'}\frac{1}{k'}\frac{k'p(k')i_{k'}(t)}{\sum_{k''}k''p(k'')},\ ] ] where denotes the probability that a randomly selected node has degree .the factor accounts for the probability that one of the infected neighbors of a node , with degree , will contact this node at the present time step .note that the infected density is given by so eq . 
( 4 ) can be rewritten as $\frac{{\rm d}\rho_k(t)}{{\rm d}t} = \lambda A k\,[1-\rho_k(t)]\,\frac{\rho(t)}{\langle k\rangle}$, where $\lambda$ denotes the spreading rate, $A$ the identical infectivity, and $\langle k\rangle = \sum_{k''}k''P(k'')$ the average degree. Manipulating the operator $\sum_k P(k)(\cdot)$ on both sides, and neglecting terms of order $\rho^2$, one obtains the evolution behavior of $\rho(t)$ as follows: $\rho(t) \simeq \rho(0)\,{\rm e}^{t/\tau}$ with $\tau = 1/(\lambda A)$, where $\tau$ is a constant independent of the power-law exponent $\gamma$ (a direct simulation sketch of this process is given below).

[Caption, figure 3: $\rho(t)$ vs time for different values of the power-law exponent. The black squares, red circles, blue up-triangles, green down-triangles, and pink diamonds (from up to down) denote the cases of [...] and 4.0, respectively. The numerical simulations are implemented based on the scale-free configuration network model. The networks are of size [...] and with average degree [...], the spreading rate is given as [...], and the data are averaged over 10000 independent runs.]

[Caption, figure 4: $\rho(t)$ vs time with different vaccinating ranges. Figures 4a and 4b show the results of targeted immunization for the standard SI process in normal and single-log plots, respectively; correspondingly, figures 4c and 4d display the results for the present model. In all four panels, the black solid, red dash, blue dot and green dash-dot curves represent the cases of [...], 0.001, 0.005 and 0.01, respectively. The numerical simulations are implemented based on a BA network of size [...] and with average degree [...], the spreading rate is given as [...], and the data are averaged over 5000 independent runs. For comparison, the infectivity of the present model is set as [...].]
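A minimal, self-contained sketch of the simulation described above is given here. The network construction (a Barabási-Albert-style preferential attachment seeded with a small ring, multi-edges tolerated), the parameter values and the random seed are illustrative choices for this sketch only, not the settings used for the figures.

```cpp
// Sketch of the SI model with identical infectivity A on a BA-like network.
// All parameters (N, m, A, lambda, number of steps) are illustrative only.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int N = 5000, m = 3;          // network size, edges per new node
    const int A = 3;                    // identical infectivity: contacts/step
    const double lambda = 0.01;         // spreading rate
    std::mt19937 rng(12345);

    // Barabasi-Albert preferential attachment: choosing an endpoint
    // uniformly from 'stubs' is equivalent to choosing a node ~ its degree.
    std::vector<std::vector<int>> adj(N);
    std::vector<int> stubs;
    for (int v = 0; v < m; ++v) {                          // small seed ring
        int u = (v + 1) % m;
        adj[v].push_back(u); adj[u].push_back(v);
        stubs.push_back(v);  stubs.push_back(u);
    }
    for (int v = m; v < N; ++v) {
        for (int e = 0; e < m; ++e) {
            int u = stubs[std::uniform_int_distribution<int>(
                              0, (int)stubs.size() - 1)(rng)];
            adj[v].push_back(u);
            adj[u].push_back(v);
            stubs.push_back(u);
        }
        for (int e = 0; e < m; ++e) stubs.push_back(v);
    }

    // SI dynamics: every infected node makes exactly A contacts per step,
    // each with a uniformly chosen neighbour; a contacted susceptible node
    // becomes infected with probability lambda per contact.  Contacts to
    // already infected nodes are "wasted", as in the model definition.
    std::vector<char> infected(N, 0);
    std::vector<int> front;
    int seed = std::uniform_int_distribution<int>(0, N - 1)(rng);
    infected[seed] = 1;
    front.push_back(seed);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    for (int t = 1; t <= 200; ++t) {
        std::vector<int> newly;
        for (int i : front)
            for (int c = 0; c < A; ++c) {
                const std::vector<int>& nb = adj[i];
                int j = nb[std::uniform_int_distribution<int>(
                               0, (int)nb.size() - 1)(rng)];
                if (!infected[j] && unit(rng) < lambda) {
                    infected[j] = 1;
                    newly.push_back(j);
                }
            }
        for (int j : newly) front.push_back(j);
        std::printf("%d %f\n", t, (double)front.size() / N);   // t, rho(t)
    }
    return 0;
}
```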
[Caption, figure 5: $\rho(t)$ vs time for different values of the preference exponent used in the fast spreading strategy discussed below. In figure 5(a), the black solid, blue dot, magenta dash-dot, red dash and green dash-dot-dot curves correspond to [...], -1, -2, 1 and 2, respectively. In figure 5(b), the black solid, red dash, blue dot, green dash-dot, magenta dash-dot-dot and cyan short-dash curves, from up to down, correspond to [...], 0.1, 0.2, 0.3, 0.4 and 0.5, respectively. In figure 5(c), the black solid, red dash, blue dot, green dash-dot, magenta dash-dot-dot and cyan short-dash curves correspond to [...], -0.1, -0.2, -0.3, -0.4 and -0.5, respectively. The numerical simulations are implemented based on the extensional BA network of size [...] and with average degree [...], the spreading rate is given as [...] and the data are averaged over 5000 independent runs.]

In Fig. 2, we report the simulation results of the present model for different spreading rates ranging from 0.0001 to 0.01. The curves of $\rho(t)$ vs time can be well fitted by a straight line in single-log plot for small times, with slope proportional to the spreading rate (see also the inset of Fig. 2b, where the curves for different spreading rates collapse to one curve under the rescaled time), which strongly supports the analytical results. Furthermore, based on the scale-free configuration model, we investigated the effect of network structure on epidemic behaviors. Different from the standard SI network model, which is highly affected by the power-law exponent, as shown in Fig. 3, the exponent here has almost no effects on the epidemic behaviors of the present model.
in other words , in the present model, the spreading rate , rather than the heterogeneity of degree distribution , governs the epidemic behaviors .an interesting and practical problem is whether the epidemic propagation can be effectively controlled by vaccination aiming at part of the population .the most simple case is to select some nodes completely randomly , and then vaccinate them . by applying the percolation theory , this casecan be exactly solved .the corresponding result shows that it is not an efficient immunization strategy for highly heterogeneous networks such as scale - free networks .recently , some efficient immunization strategies for scale - free networks are proposed . on the one hand , if the degree of each node can not be known clearly , an efficient strategy is to vaccinate the random neighbors of some randomly selected nodes since the node with larger degree has greater chance to be chosen by this double - random chain than the one with small degree . on the other hand ,if the degree of each node is known , the most efficient immunization strategy is the so - called _ targeted immunization _ , wherein the nodes of highest degree are selected to be vaccinated ( see also a similar method in ref . ) . here, we compare the performance of the targeted immunization for standard si model and the present model . to implement this immunization strategy , a fraction of population having highest degree , denoted by ,are selected to be vaccinated .that is to say , these nodes will never be infected but the contacts between them and the infected nodes are also counted . clearly , in both the two models , the hub nodes have more chances to receive contacts from their infected neighbors , thus this targeted immunization strategy must slow down the spreading velocity . in fig . 4a and fig .4b , we report the simulation results for the standard si model .the spreading velocity remarkably decreases even only a small fraction , , of population get vaccinated , which strongly indicate the efficiency of the targeted immunization .relatively , the effect of the targeted immunization for the present model is much weaker ( see fig . 4c and fig .the difference is more obvious in the single - log plot ( see fig . 4b and fig .4d ) : the slope of the curve , which denotes the time scale of the exponential term that governs the epidemic behaviors , sharply decreases even only a small amount of hub nodes are vaccinated in standard si process while changes slightly in the present model .as mentioned in the sec . 4 , previous studies about network epidemic processes focus on how to control the epidemic spreading , especially for scale - free networks .contrarily , few studies aim at accelerating the epidemic spreading process . however , a fast spreading strategy may be very useful for enhancing the efficiency of network broadcasting or for making profits from network marketing . in this section, we give a primary discussion on this issue by introducing and investigating a simple fast spreading strategy .since the whole knowledge of network structure may be unavailable for large - scale networks , here we assume only local information is available . in our strategy ,at every time step , each infected node will contact its neighbor ( in the broadcasting process , it means to forward a message to node ) at a probability proportional to , where denotes the degree of .there are two ingredients simultaneously affect the performance of the present strategy . 
on the one hand ,the strategy preferring large - degree node ( i.e. the strategy with ) corresponds to shorter average distance in the path searching algorithm , thus it may lead to faster spreading . on the other hand , to contact an already infected node ( i.e. to forward a message to a node having already received this message ) has no effects on the spreading process , and the nodes with larger degrees are more easily to be infected according to eq .( 6 ) in the case of .therefore , the strategy with will bring many redundant contacts that may slow down the spreading . for simplicity , we call the former the _ shorter path effect _ ( spe ) , and the latter the _ redundant contact effect _ ( rce ). figure 5(a ) shows the density of infected individuals as a function of for different . clearly , due to the competition between the two ingredients , spe and rce , the strategies with too large ( e.g. ) or too small ( e.g. ) are inefficient comparing with the unbiased one with .the cases when is around zero are shown in figs .5(b ) and 5(c ) . in fig .5(b ) , one can see that the rce plays the major role in determining the epidemic velocity when ; that is , larger leads to slower spreading . as shown in fig .5(c ) , the condition is much more complex when : in the early stage , the unbiased strategy seems better ; however , as time goes on , it is exceeded by the others .almost all the previous studies about the si model in scale - free networks essentially assume that the nodes of large degrees are not only dominant in topology , but also the superspreaders .however , not all the si network processes can be appropriately described under this assumption .typical examples include the network broadcasting process with a limited forwarding capacity , the epidemics of sexually transmitted diseases where all individuals sexual activities are pretty much the same due to the physiological limitations , the email service systems with limited ability to accept new clients , the network marketing systems where the referral of products to potential consumers costs money and time , and so on .inspired by these practical requirements , in this article we have studied the behaviors of susceptible - infected epidemics on scale - free networks with identical infectivity .the infected population grows in an exponential form in the early stage .however , different from the standard si network model , the epidemic behavior is not sensitive to the power - law exponent , but is governed only by the spreading rate .both the simulation and analytical results indicate that it is the heterogeneity of infectivities , rather than the heterogeneity of degrees , governs the epidemic behaviors .further more , we compare the performances of targeted immunization on the standard si process and the present model . 
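The targeted immunization used in this comparison is straightforward to implement: the sketch below (function and variable names are ours, for illustration) marks the fraction g of highest-degree nodes as vaccinated; in the simulation loop such nodes are simply never allowed to become infected, while contacts addressed to them are still counted.

```cpp
// Illustrative helper: mark the fraction g of highest-degree nodes as
// vaccinated.  In the simulation loop, a vaccinated node never becomes
// infected, but contacts sent to it are still "used up".
#include <algorithm>
#include <cstdio>
#include <vector>

std::vector<char> targetedImmunization(const std::vector<int>& degree, double g) {
    const int n = (int)degree.size();
    std::vector<int> order(n);
    for (int i = 0; i < n; ++i) order[i] = i;
    // Sort node indices by decreasing degree.
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return degree[a] > degree[b]; });
    std::vector<char> vaccinated(n, 0);
    const int nVac = (int)(g * n);
    for (int i = 0; i < nVac; ++i) vaccinated[order[i]] = 1;
    return vaccinated;
}

int main() {
    // Toy degree sequence, for illustration only.
    std::vector<int> degree = {1, 7, 2, 9, 3, 3, 5, 2, 8, 1};
    std::vector<char> vac = targetedImmunization(degree, 0.2);
    for (size_t i = 0; i < vac.size(); ++i)
        if (vac[i]) std::printf("vaccinate node %zu (degree %d)\n", i, degree[i]);
    return 0;
}
```

In the infection update of the SI sketch given earlier, the susceptibility test then becomes `if (!infected[j] && !vaccinated[j] && ...)`, which is the only change required.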
in this standard si process ,the spreading velocity decreases remarkably even only a slight fraction of population are vaccinated .however , since the infectivity of the hub nodes in the present model is just equal to that od the small - degree node , the targeted immunization for the present model is much less efficient .we have also investigated a fast spreading strategy when only local information is available .different from previous reports about some relative processes taking place on scale - free networks , we found that the strategy preferring small - degree nodes is more efficient than those preferring large nodes .this result indicates that the redundant contact effect is more important than the shorter path effect .this finding may be useful in practice .very recently , some authors suggested using a quantity named _ saturation time _ to estimate the epidemic efficiency , which means the time when the infected density , , firstly exceeds 0.9 . under this criterion ,the optimal value of leading to the shortest saturation time is -0.3 .some recent studies on network traffic dynamics show that the networks will have larger throughput if using routing strategies preferring small - degree nodes .it is because this strategy can avoid possible congestion occurring at large - degree nodes .although the quantitative results are far different , there may exist some common features between network traffic and network epidemic .we believe that our work can further enlighten the readers on this interesting subject .this work was partially supported by the national natural science foundation of china under grant nos .70471033 , 10472116 , 10532060 , 70571074 and 10547004 , the special research founds for theoretical physics frontier problems under grant no .a0524701 , and specialized program under the presidential funds of the chinese academy of science .watts1998 d. j. watts , and s. h. strogatz , nature * 393 * , 440 ( 1998 ) .barabsi , and r. albert , science * 286 * , 509 ( 1999 ) .r. albert , and a. -l .barabsi , rev .* 74 * , 47 ( 2002 ) . s. n. dorogovtsev , and j. f. f. mendes , adv .phys . * 51 * , 1079 ( 2002 ) .m. e. j. newman , siam review * 45 * , 167 ( 2003 ) .s. boccaletti , v. latora , y. moreno , m. chavez , and d. -u .hwang , phys .rep . * 424 * , 175 ( 2006 ) .r. pastor - satorras , and a. vespignani , _ epidemics and immunization in scale - free networks_. in : s. bornholdt , and h. g. schuster ( eds . ) _ handbook of graph and networks _ , wiley - vch , berlin , 2003 .t. zhou , z. -q .fu , and b. -h .wang , prog .* 16 * , 452 ( 2006 ) .r. pastor - satorras , and a. vespignani , phys .lett . * 86 * , 3200 ( 2001 ) .r. pastor - satorras , and a. vespignani , phys .e * 63 * , 066117 ( 2001 ) .r. m. may , and a. l. lloyd , phys .e * 64 * , 066112 ( 2001 ) .y. moreno , r. pastor - satorras , and a. vespignani , eur .j. b * 26 * , 521 ( 2002 ) .a. s. tanenbaum , _ computer networks _( prentice hall press , 1996 ) .w. krause , j. scholz , and m. greiner , physica a * 361 * , 707 ( 2006 ) .j. park , and s. sahni , ieee trans .computers * 54 * , 1081 ( 2005 ) .h. a. harutyunyan , and b. shao , j. parallel & distributed computing * 66 * , 68 ( 2006 ) .m. barthlemy , a. barrat , r. pastor - satorras , and a. vespignani , phys .92 * , 178701 ( 2004 ) .m. barthlemy , a. barrat , r. pastor - satorras , and a. vespignani , j. theor . biol . * 235 * , 275 ( 2005 ) .t. zhou , g. yan , and b. -h .wang , phys .e * 71 * , 046141 ( 2005 ) .gu , t. zhou , b. -h .wang , g. yan , c. 
-p .zhu , and z. -q .contin . discret .algorithm * 13 * , 505 ( 2006 ) .zhang , l. -l .rong , and f. comellas , physica a * 364 * , 610 ( 2006 ) .a. barrat , m. barthlemy , and a. vespignani , phys .lett . * 92 * , 228701 ( 2004 ) .a. barrat , m. barthlemy , and a. vespignani , phys .e * 70 * , 266149 ( 2004 ) .g. yan , t. zhou , j. wang , z. -q .fu , and b. -h .wang , chin .* 22 * , 510 ( 2005 ) .a. vazquez , phys .* 96 * , 038702 ( 2006 ) .s. bassetti , w. e. bischoff , and r. j. sherertz , emerging infectious diseases * 11 * , 637 ( 2005 ) .m. small , and c. k. tse , physica a * 351 * , 499 ( 2005 )bai , t. zhou , and b. -h .wang , arxiv : physics/0602173 .p. gupta , and p. r. kumar , ieee trans .theory * 46 * , 388 ( 2000 ) .f. liljeros , c. r. rdling , l. a. n. amaral , h. e. stanley , and y. berg , nature * 411 * , 907 ( 2001 ) . f. liljeros , c. r. rdling , and l. a. n. amaral , microbes and infection * 5 * , 189 ( 2003 ) .a. schneeberger , c. h. mercer , s. a. j. gregson , n. m. ferguson , c. a. nyamukapa , r. m. anderson , a. m. johnson , and g. p. garnett , sexually transmitted diseases * 31 * , 380 ( 2004 ) .see the details about gmail system from the web site http://mail.google.com/mail/help/intl/en/about.html .b. j. kim , t. jun , j. y. kim , and m. y. choi , physica a * 360 * , 493 ( 2006 ) . m. e. j. newman , s. h. strogatz , and d. j. watts , phys . rev .e * 64 * , 026118 ( 2001 ) .f. chung , and l. lu , annals of combinatorics * 6 * , 125 - 145 ( 2002 ) .x. li , and x. -f .wang , ieee trans .automatic control * 51 * , 534 ( 2006 ) .r. cohen , k. erez , d. ben - avraham , and s. havlin , phys .* 85 * , 4626 ( 2000 ) .d. s. callway , m. e. j. newman , s. h. strogatz , and d. j. watts , phys .lett . * 85 * , 5468 ( 2000 ) .r. huerta , and l. s. tsimring , phys .e * 66 * , 056115 ( 2002 ) .r. cohen , s. havlin , and d. ben - avraham , phys .rev . lett . * 91 * , 247901 ( 2003 ) .r. pastor - satorras , and a. vespignani , phys .e * 65 * , 036104 ( 2002 ) . n. madar , t. kalisky , r. cohen , d. ben - avraham , and s. havlin , eur .j. b * 38 * , 269 ( 2004 ) .z. dezs , and a. -l .barabsi , phys .e * 65 * , 055103 ( 2002 ) .l. a. adamic , r. m. lukose , a. r. puniyani , and b. a. huberman , phys .e * 64 * , 046135 ( 2001 ) .kim , c. n. yoon , s. k. han , and h. jeong , phys .e * 65 * , 027103 ( 2002 ) . c. -p .zhu , s. -j .xiong , y. -t .tian , n. li , and k. -s .jiang , phys .lett . * 92 * , 218702 ( 2004 ) .j. saramaki , and k. kaski , j. theor . biol . *234 * , 413 ( 2005 ) . c. -y .yin , b. -h .wang , w. -x .wang , t. zhou , and h. -j .yang , phys . lett .a * 351 * , 220 ( 2006 ) .wang , b. -h .wang , c. -y .yin , y. -b .xie , and t. zhou , phys .e * 73 * , 026111 ( 2006 ) .g. yan , t. zhou , b. hu , z. -q .fu , and b. -h .wang , phys .e * 73 * , 046108 ( 2006 ) .
In this article, we proposed a susceptible-infected model with identical infectivity, in which, at every time step, each infected node can only contact a constant number of neighbors. We implemented this model on scale-free networks, and found that the infected population grows in an exponential form with the time scale inversely proportional to the spreading rate. Furthermore, by numerical simulation, we demonstrated that the targeted immunization of the present model is much less efficient than that of the standard susceptible-infected model. Finally, we investigated a fast spreading strategy when only local information is available. Different from the extensively studied path-finding strategy, the strategy preferring small-degree nodes is more efficient than that preferring large-degree nodes. Our results indicate the existence of an essential relationship between network traffic and network epidemics on scale-free networks.
given its success for finite - state systems , the model checking approach to verification has been extended to various models based on automata , and including features such as time , probability and infinite data structures .these models allow one to represent software systems more faithfully , by representing timing constraints , randomization , and _e.g. _ unbounded call stacks . at the same time, they often offer the possibility to consider _ quantitative _ verification questions , such as whether the best execution time meets a requirement , or whether the system is reliable with high probability .quantitative verification is notably hard for infinite - state systems , and often requires the development of techniques dedicated to each class of models .a decade ago , abdulla , ben henda and mayr introduced the concept of decisiveness for denumerable markov chains .formally , a markov chain is decisive w.r.t .a set of states if runs almost - surely reach or a state from which can no longer be reached .the concept of decisiveness thus forbids some weird behaviours in denumerable markov chains , and allows one to lift most good properties from finite markov chains to denumerable ones , and therefore to adapt existing verification algorithms to infinite - state models .in particular , assuming decisiveness enables the quantitative model checking of ( repeated ) reachability properties , by providing an approximation scheme , which is guaranteed to terminate for any given precision for decisive markov chains .decisiveness also elegantly subsumes other concepts such as the existence of finite attractors , or coarseness .decisive markov chains however are not general enough to represent stochastic real - time systems . indeed ,to faithfully model time in real - time systems , it is adequate to use dense time , that is , timestamps of events are taken from a dense domain ( like the set of rational or of real numbers ) .this source of infinity for the state - space of the system is particularly difficult to handle : the state - space is non - denumerable ( even continuous ) , the branching in the transition system is also non - denumerable , _etc_. for those reasons , stochastic real - time systems do not fit in the framework of decisive markov chains of . on the other hand , standard analysis techniques for non -stochastic real - time systems also can not be easily adapted to stochastic real - time systems . traditionally , these techniques rely on the design of appropriate finite abstractions , which preserve good properties of the original model .a prominent example of such an abstraction is that of the region automaton for timed automata .however , these abstractions most often do not preserve quantitative properties and , in the context of stochastic systems they may be too coarse already for the evaluation of the probability of properties as simple as reachability properties . 
a general framework to analyse a large class of stochastic real - time systems is thus lacking .in this article , we face this issue and provide a framework to perform the analysis of general _ stochastic transition systems _ ( stss for short ) .to do so , we generalize the main concepts of ( such as decisiveness , attractors ) , and standard notions for markov chains ( like fairness ) .stss are purely stochastic markov processes , that is , markov chains with a continuous state - space .note that , while this journal version builds on the conference paper , we choose here to phrase our results for time - homogeneous and markovian models . as mentioned in ,the markovian assumption is not a severe restriction since many apparently non markovian processes can be recast to markovian models by changing the state space . in our opinion , this choice furthermore enabled the design of a richer and more elegant theory ( compared to ) .our first contribution is to define various notions of decisiveness ( inherited from ) , notions of fairness and of attractors in the general context of stss . to complete the semantical picture ,we explicit the relationships between these notions , in the general case of stss , and also when restricting to denumerable markov chains . decisiveness or the existence of attractors will be later exploited to analyze properties for stss . as mentioned earlier, the analysis of real - time systems often requires the development of abstractions . as a second contribution ,we define a notion of abstraction , which makes sense for stss .concepts of soundness and completeness are naturally defined for those abstractions , and general transfer properties are given , which will be central to several verification algorithms on stss .the special case of denumerable abstractions is discussed , since it allows one to transfer more properties from the abstract system to the concrete one .our third contribution focuses on the qualitative model checking problem for various properties .in particular , we extend the results of and show that , under some decisiveness assumptions , the almost - sure model checking of ( repeated ) reachability properties reduces to a simpler problem , namely to a reachability problem with probability .we advocate that this reduction simplifies the problem : in countable models , the -reachability amounts to the non existence of a path , in the underlying non - probabilistic system ; beyond countable models , checking that a reachability property is satisfied with probability amounts to exhibiting a somehow regular set of executions with positive measure . beyond repeated reachability properties , we use abstractions to design algorithms for the qualitative model - checking problem of arbitrary -regular properties , in case the sts admits an abstraction with the finite attractor property .the latter contribution is completely new compared to the original results of and our conference paper .it is inspired by a procedure of for probabilistic lossy channel systems , a special class of denumerable markov chains with a finite attractor .our fourth contribution is the design of generic approximation procedures for the quantitative model - checking problem , inspired by the path enumeration algorithm of purushothoman iyer and narashima . under some decisiveness assumptions , we prove that these approximation schemes are guaranteed to terminate . 
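Restricted to a finite Markov chain, the mechanism behind this approximation scheme can be phrased in a few lines. The sketch below is only meant to illustrate it: the toy chain, the target set B, the avoid set (written $\widetilde{B}$ in the paper's notation, the states from which B can no longer be reached) and the stopping threshold are invented for the example, and it does not reflect the general measurable-state construction developed later in the paper.

```cpp
// Sketch (on a finite Markov chain) of the decisiveness-based approximation
// of P(F B): iterate the step-bounded probabilities
//   pYes(n) = P(reach B within n steps)        and
//   pNo(n)  = P(reach Btilde within n steps before B),
// where Btilde is the set of states from which B is unreachable.  Under
// decisiveness w.r.t. B, pYes(n) + pNo(n) -> 1, so the loop terminates for
// every eps > 0 and P(F B) lies in [pYes, pYes + eps].
#include <cstdio>
#include <vector>

int main() {
    const int N = 5;                         // states 0..4, B = {0}
    double P[5][5] = {{0}};                  // toy transition matrix
    // Biased walk on {1,2,3} with absorbing states 0 (= B) and 4.
    P[0][0] = 1.0;
    for (int s = 1; s <= 3; ++s) { P[s][s - 1] = 0.4; P[s][s + 1] = 0.6; }
    P[4][4] = 1.0;

    std::vector<char> inB(N, 0), inBtilde(N, 0);
    inB[0] = 1;
    inBtilde[4] = 1;                         // here Btilde is known by hand;
                                             // in general it is a graph analysis
    const double eps = 1e-6;
    std::vector<double> mu(N, 0.0);
    mu[2] = 1.0;                             // initial distribution: Dirac at 2
    double pYes = 0.0, pNo = 0.0;

    // Harvest the mass already sitting in B or Btilde at time 0.
    for (int s = 0; s < N; ++s) {
        if (inB[s])           { pYes += mu[s]; mu[s] = 0.0; }
        else if (inBtilde[s]) { pNo  += mu[s]; mu[s] = 0.0; }
    }
    while (pYes + pNo < 1.0 - eps) {
        std::vector<double> next(N, 0.0);
        for (int s = 0; s < N; ++s)
            for (int t = 0; t < N; ++t) next[t] += mu[s] * P[s][t];
        // Freeze the mass that has just reached B or Btilde.
        for (int s = 0; s < N; ++s) {
            if (inB[s])           { pYes += next[s]; next[s] = 0.0; }
            else if (inBtilde[s]) { pNo  += next[s]; next[s] = 0.0; }
        }
        mu = next;
    }
    std::printf("P(F B) in [%.6f, %.6f]\n", pYes, pYes + eps);
    return 0;
}
```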
assuming the stss can be represented finitely and enjoy some smooth effectiveness properties , one derives approximation algorithms : one can approximate , up to a desired ( arbitrary ) precision , the probability of ( repeated ) reachability properties .note that without these effectiveness properties , one can not hope for algorithms , and this motivates our above formulation of `` procedures '' .further , once again via the use of an abstraction with the finite attractor property , we design an approximation algorithm for -regular properties ; this algorithm makes use of the attractor of the abstract model to convert the quantitative analysis of an -regular property into the quantitative verification of a reachability property in the concrete model .up to our knowledge , this approach is completely new , and provides an interesting framework for quantitative verification of stochastic systems .our last contribution consists in instantiating our framework with high - level stochastic models , stochastic timed automata ( sta ) and generalized semi - markov processes ( gsmp ) , which are two models combining dense - time and probabilities .this allows us to derive decidability and approximability results for the verification of those models .some of these results were known from the literature , _e.g. _ the ones from , but our generic approach permits to view them in a unified framework , and to obtain them with less effort .we also derive interesting new approximability results for sta and gsmps .in particular , the approximability results derived from this paper for sta are far more general than those obtained using an _ ad - hoc _ approach in .the paper concludes with an overview of our main results , organized as a travel guide to stss : it summarizes the relationships between all notions , and provides the reader recipes to analyze stss .the most technical proofs are postponed to the appendix .pointers are given when relevant .in this section , we define the general model of stochastic transition systems , which are somehow markov chains with a continuous state - space .this model corresponds to labelled markov processes of with a single action ( hence removing non - determinism ) .we then define several probability measures , on infinite paths , but also on the state - space , which give different point - of - views over the behaviour of the systems .we continue by defining regular measurable events , and end up with defining deterministic muller automata , and technical material for handling properties specified by these automata .given a measurable space ( is a -algebra over ) , we write for the set of probability distributions over . in the sequel ,when the context is clear , we will omit the -algebra and simply write this set as .a _ stochastic transition system _ ( sts ) is a tuple consisting of a measurable analytic space , and ] is measurable , and we write for ) ] ; we then write that is . furthermore : is said ( strongly , persistently ) decisive w.r.t . whenever it is ( strongly , persistently ) decisive w.r.t . from every initial distribution ; we then write that is ( resp . , ) .also , given , we say that is ( strongly , persistently ) decisive w.r.t . from if it is ( , ) for each .we write is ( , ) .we say that that is ( strongly , persistently ) decisive w.r.t . 
if it is ( , ) for each .we write is ( , ) .intuitively , the ( simple ) decisiveness property says that , almost - surely , either will eventually be visited , or states from which can no more be reached will eventually be visited .it denotes a dichotomy between the behaviours of the sts : there are those behaviours that visit , and those that do not visit , but then visit ; other behaviours have probability to occur . strong decisiveness imposes a similar dichotomy , but between behaviours that visit infinitely often and behaviours that visit .persistent decisiveness refines simple decisiveness , but by looking at an arbitrary horizon .[ example : btildedeci ] let us consider again the sts of example [ example : dmcrandomwalk ] , representing a discrete - time random walk . since the chain is strongly connected , for each , .let us assume that and that , the dirac distribution over state .then it can be shown that for each set of states , and thus , is .however if and then ; but since , we derive that is not . consider now for each , .since , classical results on random walks imply that for each , . and since , we obtain that is not .consider now the sts of example [ example : continuous ] .assume that and that and fix some .we consider . then one can compute .note that here , as time almost - surely always progresses , .it thus follows that is and .the notion of finite attractor has been used in several contexts like probabilistic lossy channel systems ( see _ e.g. _ ) and abstracted in in the context of denumerable markov chains .a finite attractor is a finite set of states which is reached almost - surely from every state of the system .we lift this definition to our context , obviously relaxing the finiteness assumption , since it is very unlikely that systems with a continuous state - space will have finite attractors .since the whole set of states is a trivial attractor , this general definition will prove useful once we are able to define attractors with some finiteness property , which will be done through _ abstractions _ in section [ sec : abstractions ] .let be an initial distribution . is a _-attractor for _ if .further , is an _attractor for _ if it is a -attractor for every .consider the random walk of example [ example : dmcrandomwalk ] and assume again that .for , it can be shown , as stated before , that is a -attractor for .however , for any distribution over naturals greater than , and thus is not a -attractor .on the other hand , if we assume , it is a well - known property of random walks that is reached almost - surely from every state , hence we can infer that any bounded subset of is an attractor ( for every initial distribution ) .attractors are very strong properties of stss , and even in our general context , the following strong property is satisfied .lemmaattractorgf [ lemma : attractorgf ] if is an attractor for then for every initial distribution , let be an attractor for , i.e. for each initial distribution , . 
towards a contradiction , assume that there is such that .then , .now remember that from the definitions , we have that it follows that there is such that from lemma [ lemma : integration ] , if we write and for each , we get that for each , since and for each , .it can be seen that in this case , for each , .we write .we thus get that which contradicts the fact that is an attractor , hence a -attractor , for .fairness is a standard notion in probabilistic systems , saying that something which is allowed infinitely often should happen infinitely often almost - surely .this can for instance be instantiated in denumerable markov chains as follows : if a state is visited infinitely often , and the probability to move from to is positive , then , almost - surely , infinitely often the state is visited .it is well - known that not all markov chains are fair , but finitely - branching markov chains are fair .fairness can not be lifted directly to continuous state - space stss ( since for two states and , the probability to move from to is likely to be ) .a more careful definition of this notion must be provided for general stss . for , we define , as the set of measurable sets `` from which '' can be reached with positive probability .note that , ideally we would like to define the maximal set that allows one to reach , but the union of all such sets may not be measurable in our general context .let be some initial distribution , and .the sts is _fair w.r.t . from _ , written is , if for every , implies we then write that is . as for decisiveness, we extend this definition to sets , and use similar notations when we relax the fixed initial measure .finally , we say that is _ strongly fair _ whenever it is fair wr.t . from for every and every .[ example : fairness ] consider again the random walk of example [ example : dmcrandomwalk ] . is strongly fair by observing that there is a positive lower bound on the non - zero probabilities to reach any set of states .formally there exists such that for each , for each and for each , .it suffices to choose .[ counterexample : fairness ] consider now the dmc depicted in figure [ figure : fairness ] .consider , and , .it holds that , however , and thus is not .= [ scale=1 ] = [ ptt , draw , circle , minimum size = .9 cm ] ; = [ ptt , circle , minimum size = .9 cm ] ; = [ ->,>=stealth , rounded corners=1pt ] ; ( a1 ) at ( 0,-4 ) ; ( a2 ) at ( 2 , -4 ) ; ( a3 ) at ( 4 , -4 ) ; ( a4 ) at ( 6 , -4 ) ; ( b ) at ( 0 , -1.5 ) ; ( a1 ) ( a2 ) node[midway , below ] ; ( a2 ) ( a3 ) node[midway , below ] ; ( a3 ) ( a4 ) node[midway , below ] ; ( a4 ) ( 8 , -4 ) ; ( a1 ) ( b ) node[midway , right ] ; ( a2.150 ) ( b.300 ) node[midway , right ] ; ( a3.120 ) ( b.330 ) node[midway , right ] ; ( a4.north ) ( b.east ) ; ( b ) to[bend right=45 ] node[midway , left ] ( a1 ) ; in this section , we compare all the notions , and give the precise links between all these notions .we first analyze the general case , and reinforce the results in the case of dmcs .we can establish the following links between the notions of decisiveness and fairness .the first result is straightforward . for each and for each , it holds that ( resp . , ) implies ( resp . 
, ) , and implies .we also get straightforwardly from the definitions , the following implication .[ lemma : strdec ] for each and for each , it holds that implies , and implies .it then turns out that strong decisiveness and persistent decisiveness are two equivalent notions .[ lemma : equivstrpers ] for each and for each , it holds that is equivalent to .fix and .fix and assume that is , i.e. for each , .we want to show that is , i.e. that , or equivalently that .we have that : hence we get that and thus is and as it holds true for each .now fix again and assume that is , i.e. . from lemma [ lemma : btildeequivfgf ] ( fourth item ) , we get that and it is then straightforward to establish that for each , . we hence deduce that is and thus as it holds true for each . this concludes the proof .now , we have the following equivalences between the decisiveness notions . for each , it holds that all three notions , and are equivalent .fix . from lemmas [ lemma :strdec ] and [ lemma : equivstrpers ] , it remains to prove that or .we prove the last one .we pick and assume that is , i.e. for each , .pick and .we get that b^c \wedge \g[\ge i ] ( \btilde)^c ) & \le & \prob_{\mu_i}^{\calt } ( \g(b^c \cap ( \btilde)^c ) ) \\ & & \text{where , from lemma~\ref{lemma : integration } } \\ & & \text{and from a similar argument as in the proof of lemma~\ref{lemma : attractorgf } } \\ & \le & 0\ \text{since is .}\end{aligned}\ ] ] hence for each , and since it holds true for each and each , we get that is . finally , we show the following links between fairness and decisiveness . for each and for each , it holds that implies , and implies .fix and .assume that is strongly decisive w.r.t . from , that is for each , .we want to prove that for each , for each with , we have that . 
fix and such that .we can notice that indeed , towards a contradiction , assume that .observe that then , there are such that it follows , from lemma [ lemma : integration ] like seen previously , that there is ( ) , such that and since , we get that hence , and we can apply lemma [ lem : btilde ] ( second item ) to obtain a contradiction .hence , equation holds .we then write : which proves that .then , the implication is immediate since the previous implication holds for any initial distribution .we can summarize the previous implications as follows : for each and for each , it holds that ( decisive ) is ; ( strongly_decisive ) [ right = of decisive ] is ; ( persistently_decisive ) [ right = of strongly_decisive ] is ; ( fair ) [ right = of persistently_decisive ] is ; ( decisive ) ( strongly_decisive ) ; ( strongly_decisive ) ( persistently_decisive ) ; ( persistently_decisive ) ( fair ) ; ( decisive ) is ; ( strongly_decisive ) [ right = of decisive ] is ; ( persistently_decisive ) [ right = of strongly_decisive ] is ; ( fair ) [ right = of persistently_decisive ] is ; ( decisive ) ( strongly_decisive ) ; ( strongly_decisive ) ( persistently_decisive ) ; ( persistently_decisive ) ( fair ) ; the three missing implications in the above proposition do actually not hold , as witnessed by the following example .we also illustrate the fact that and are incomparable .consider the random walk of example [ example : dmcrandomwalk ] .we have shown in example [ example : fairness ] that is strongly fair .now let us assume that and let us consider the initial distribution , the dirac distribution over .then from example [ example : btildedeci ] , is decisive from w.r.t .any set of states .again in this example , we have observed that it is not strongly decisive w.r.t .any set of the form with .this shows that we do not have , nor and . and since is not decisive from w.r.t . , this also proves that does not imply . in order to illustrate that does not imply in general , we consider the denumerable markov chain of example [ counterexample : fairness ] .we consider and .it is easily observed that is as we start in with probability , but we have shown that is not .if is a dmc , i.e. if is at most denumerable and , we can complete the picture using the following result of .[ lemma : dmcfiniteattractordecisive ] if is a dmc that has a finite attractor , then is decisive w.r.t .any set of states .we can sum up the previous implications as follows : ( attractor ) ; ( decisive ) [ right = of attractor ] is ; ( strongly_decisive ) [ right = of decisive ] is ; ( persistently_decisive ) [ right = of strongly_decisive ] is ; ( fair ) [ below = of strongly_decisive , yshift=.3 cm ] is strongly fair ; ( attractor ) ( decisive ) ; ( decisive ) ( strongly_decisive ) ; ( strongly_decisive ) ( persistently_decisive ) ; ( strongly_decisive ) ( fair ) ;while decisiveness is well - defined for general stss , proving that a given sts is decisive might be technical in general .a standard approach in model - checking to avoid such difficulties is to abstract the system into a simpler one , that can be analyzed and provides guarantees on the concrete system .we thus propose a notion of abstraction , which will help proving properties of general stss . also , through abstractions , we will be able to characterize meaningful attractors .let and be two stss .let be a measurable function .a set is said _-closed _ whenever : for every , if and , then . 
following , we define the pushforward of as by for every and for every .the role of the pushforward is to transfer the measures from to . is an _-abstraction _ of if from the definitions of , and equivalent measures , the notion of -abstraction equivalently requires that for every and every , intuitively , the two stss have the same `` qualitative '' steps .the notion of -abstraction naturally extends to labelled stss . is an -abstraction of whenever : * is an -abstraction of ; * ; * for every , , ; * for every , .the two last conditions imply that for each , is -closed .moreover , for each , .we now establish several technical results , which explicit how stss are related through an -abstraction .the relationship is only qualitative , in the sense that it only relates positive reachability probabilities , but does not relate almost - sure or lower - bounded probabilities .lemmapushforwarddelta [ lemma : pushforwarddelta ] let be a measurable function .then for every and every , .lemmaiterative [ lemma : iterative ] assume that is an -abstraction of .then , for every , for every , is equivalent to .in other words , the above lemma states that for each and for each , this can even be generalized to cylinders : lemmaequiv [ coro : equiv ] assume that is an -abstraction of .then for every , for every , as an immediate consequence , the positivity of properties with bounded witnesses are preserved through -abstractions : [ coro : until ] assume that is an -abstraction of . then for every , for every : note that this however does not apply to liveness properties , such as with . to ensure that these more involved properties are preserved via abstraction, we will strengthen the assumptions on the abstraction and on the stss .we assume is an -abstraction of .let .the -abstraction is _ -sound _ whenever for every : is a-abstraction of if it is -sound for every .fix .the -abstraction is _ -complete _whenever for every , is a _complete _ -abstraction of if it is -complete for every .sound and complete abstractions will guarantee that , up to , the same properties are satisfied almost - surely in and ( provided some properties are satisfied by and ) .[ example : abstr ] consider again the stss with parameter and with parameters and of examples [ example : dmcrandomwalk ] and [ example : continuous ] .let be the mapping defined as follows : for every and every , .it can be shown that is an -abstraction of . moreover , is sound and complete whenever .when is a dmc , soundness and completeness have a simpler characterization , which will be useful in the proofs .lemmadmc [ lem : dmc ] assume is a dmc .then : * is an -abstraction of iff for every , * is sound iff for every and every , * is complete iff for every and every , the proof of lemma [ lem : dmc ] is postponed to the appendix , page .in this section , we explain how and under which conditions one can transfer interesting decisiveness , attractor and fairness properties of stss through abstractions .[ thm : mudecisiveabstr ] if is a -sound -abstraction of , then for every : in order to prove proposition [ thm : mudecisiveabstr ] , we first show the following technical lemma , which relates avoid - sets in and in . [ lemma : btildeabstr ] let be an -abstraction of . then , for every : fix have the series of equivalences : now from lemma [ lemma : pushforwarddelta ] , one can show that by noticing that .hence iff ( _ i.e. _ ) , which concludes the proof .we are now ready to prove proposition [ thm : mudecisiveabstr ] .fix and assume that is , _i.e. 
_ to show that is , by lemma [ lemma : btildeabstr ] , it suffices to prove that the latter is immediate by since is -sound .this result obviously extends to stronger decisiveness notions .[ coro : sounddecisive ] if is a sound -abstraction of , then for every : the definitions of attractor and of sound -abstraction yield a similar result : [ lem : attr - via - sound ] if is a sound -abstraction of and if is an attractor for , then is an attractor for . as a direct consequence of lemma [ lemma : dmcfiniteattractordecisive ] and corollary [ coro : sounddecisive ] , we get the following result for denumerable abstractions , which will be crucial for designing approximation algorithms taking advantage of abstractions .[ lemma : toto ] let be a dmc with a finite attractor . if is a sound -abstraction of , then is decisive w.r.t .every -closed set .let us summarize the interesting results on denumerable abstractions .assume is an -abstraction of , and write , the set of -closed sets of .the following implications hold true : ( attractor ) ; ( decisive ) [ right = of attractor ] is ; ( strongly_decisive ) [ right = of decisive ] is ; ( persistently_decisive ) [ right = of strongly_decisive ] is ; ( fair ) [ below = of strongly_decisive , yshift=.3 cm ] is ; ( attractor ) ( decisive ) ; ( decisive ) (strongly_decisive ) ; ( persistently_decisive ) (strongly_decisive ) ; ( strongly_decisive ) ( fair ) ; we established that decisiveness properties could be transferred through sound abstractions .however in the next section , we will also see that soundness of an abstraction can be proved via decisiveness properties .it is therefore relevant to explore alternatives to prove decisiveness properties .we give here two frameworks where this can be done without any assumption on the abstraction .first , we assume a denumerable abstraction , and lower bounds on probabilities of reachability properties .propositionattractorsound [ prop : attractorsound ] let be a dmc such that is an -abstraction of .assume that there is a finite set such that is an attractor for and is an attractor for .assume moreover that for every , for every -closed set in , there exist and such that : * for every , b)\geq p ] and \widetilde{b})_{n \in { \mathbb{n}}} ] the valuation assigning to every and to each other clock , and if , we write for the valuation assigning to every clock . a _ stochastic timed automaton _ ( sta ) is a tuple where : * is a finite set of states ( or locations ) ; * is the initial state ; * is a finite set of clocks ; * is a finite set of edges ; and * for every configuration , is a(n a priori ) continuous distribution over possible delays from , that is , the support of distribution is ; * and for every , is a positive weight .originally , the semantics of an sta was defined as a probability measure on the set of possible runs of the underlying timed automaton : a run in such a timed automaton is an alternating sequence of delay transitions and of discrete transitions .a delay transition is of the form , where is a configuration and , for the configuration . ] and a discrete transition is of the form where is such that , and (\nu ) = \nu' ] for the region to which belongs .we define the abstraction as the projection which associates onto {{\cal a}}) ] , or * for each , .this is a consequence of ( * ? ? 
?* lemma f.4 ) which says that from a memoryless region , the future ( and its probability ) is independent of the precise current configuration .this in particular implies that for two configurations , for every -closed set , for every integer , b ) = \prob_{\delta_{\gamma'}}^{\calt_{{\cal a}}}(\f[=k ] b) ] .this implies the expected bounds , by taking .similarly to labelled sts , we consider labelled sta , where each location is labelled by atomic propositions . as consequences of sections [sec : qualitative ] and [ sec : quantitative ] , we get the following decidability and approximability results for reactive sta : [ coro : staresults ] let be a reactive labelled sta , and a dma . then : 1 .we can decide whether satisfies almost - surely ; 2 . for every initial distribution which is numerically amenable w.r.t . is numerically amenable w.r.t . if , given , given and given a sequence of locations and regions , one can approximate up to . ] , we can compute arbitrary approximations of .this is an application of theorem [ theo : titi ] , corollary [ coro : theotiti ] and of sections [ subsec : approx - reach ] and [ subsec : quantmullerabstr ] .it should be noted that all the hypotheses are met : * has a finite attractor : since is a finite mc then so is and we get a trivial finite attractor ; * is decisive w.r.t .any -closed sets .this second point is a little more tricky .first one should realise that since is a reactive , then is also reactive since the condition to be reactive , concerns only the distributions over the delays on each location of the sta and those distributions are not modified from the product with .it should be noted that corresponds to the thick region graph abstraction of since does not influence the behaviour of .then from proposition [ prop : stareactivesound ] , we know that is a sound -abstraction of . since is a finite mc , we get that it is decisive w.r.t .any set of states .we can thus conclude from proposition [ thm : mudecisiveabstr ] .we believe that the proposed approach through abstractions and finite attractors simplifies drastically the proof of decidability of almost - sure model - checking , and in particular avoids the ad - hoc but long and technical proof of ( * ? ? ?* lemma 7.14 ) .furthermore , we obtain interesting approximability results , some of them being consequences of , but the general case of -regular properties ( in particular properties ) being new to this paper . corollary [ coro : staresults ] can be extended to properties expressed as deterministic and complete muller _ timed _ automata ( dcmta ) , which are standard deterministic and complete , and every , there is an edge labelled by that subset which is enabled after time units .so this is complete w.r.t time and actions . ]timed automata with a muller accepting condition . indeed , the product of a reactive sta with such a dcmta is reactive . hence , the whole theory that we have developed applies : the sts of the product has a finite sound abstraction .this allows to express rich properties with timing constraints and evaluate their likelihood in the sta . we will apply a similar reasoning to single - clock sta .we therefore assume that is now a single - clock sta . as in (* section 7.1 ) , we assume the following conditions : a. for all , for all , the function is continuous ; b. if for some , and if for each , then ; c. 
there is such that for every state with unbounded , , where for each and for each , .these requirements are technical , but they are rather natural and easily satisfiable .for instance , a timed automaton equipped with uniform ( resp .exponential ) distributions on bounded ( resp .unbounded ) intervals satisfy these conditions .if we assume exponential distributions on unbounded intervals , the very last requirement corresponds to the bounded transition rate condition in , required to have reasonable and realistic behaviours . in (* section 7.1 ) , there is no clear attractor property . from the details of the proofs we can nevertheless define where is the region composed of the single null valuation .[ prop : oneclock - attractor ] the set is an attractor for .let .the set of regions for can be chosen as {i-1};c_i [ \mid 1 \le i \le h\} ] assigns distributions to every event ; * assigns to each state a set of events enabled ( or active ) in ; * is the successor function defined for whenever ; each event has an upper ( resp .lower ) bound ( resp . ) on its delay : the duration of event is randomly chosen in the interval ] .[ lemma2 ] for every and , there is such that for all such that there is a path in , for every with , \{(q',\nu ' ) \mid \nu ' \in r'\ } ) > p_2 ] , for every , if and only if . note that the above conditions refine the ones given in subsection [ subsubsec : thickgraph ] using diagonal constraints ( ) , and w.r.t .the granularity as well .we also realize that any region has either only -separated configurations , or only non--separated configurations .we write for the set of equivalence classes , also called regions .we then define the abstraction by projection , and the finite markov chain as follows : * its set of states is ; * there is an edge from to whenever there exists such that ; * from each state , we associate the uniform distribution over .since is just a rescaling of a standard region automaton , we immediately get : is a finite -abstraction of . as previously, we notice that the above abstraction is obviously complete ( since it is finite ) .it is argued in that this abstraction is not always meaningful for having information about the almost - sure satisfaction of properties by .let . as a direct consequence of lemma [ lemma1 ] we get :the set is an finite attractor for . finally , as for stas and using lemma [ lemma1 ] and [ lemma2 ], we also get : is a sound -abstraction of . as consequences , we get the following decidability and approximability results for gsmps : let be a single - ticking labelled gsmp , and be a dma . then : 1 .we can decide whether satisfies almost - surely ; 2 . for every initial distribution which is numerically amenable w.r.t . , is numerically amenable w.r.t . if , given , given and given a sequence of states and refined regions , one can approximate up to . ]we can compute arbitrary approximations of .again , the proof is similar to the ones of corollaries [ coro : staresults ] and [ coro : staoneclockresults ] .we just notice that it is obvious that if is a gsmp with no fixed - delay events , then so is .we believe our approach gives new hints into the approximate quantitative model - checking of gsmps , for which , up to our knowledge , only few results are known .for instance in , the authors approximate the probability of until formulas of the form `` the system reaches a target before time within discrete events , while staying within a set of safe states '' ( resp . 
``the system reaches a target while staying within a set of safe states '' ) for gsmps ( resp . a restricted class of gsmps which can be proved to be ) , and study numerical aspects .our approach permits to do the same with any reachability or time - bounded - delay event ) gsmps are obviously almost - surely non - zeno . ] until property on the whole class of single - ticking gsmps .the numerical aspects in our computations can be dealt with as in .we now give an overview of the results presented in this paper . in the interest of space ,not all precise statements are listed . for instance, we omit the results which assume a fixed initial distribution .also , few notations are borrowed from the paper , yet the global picture is almost self - contained .the idea is the following . given an sts and a property , figures [ fig : qualitative ] and [ fig : quantitative ] provide the assumptions should satisfy to be able to perform the qualitative or quantitative analysis of on .note that when we consider abstractions , then we assume .then , figures [ fig : properties ] , [ fig : transfer ] and [ fig : abstraction ] summarize the relationships between the various notions . they should be used to know how to prove the properties that are expected of the model , either directly or via an abstraction ( which needs to be designed ) .( hyp ) ; ( conc ) [ right = of hyp , xshift=.5 cm ] [ cols= " < " , ] ; ( decisive ) ( sound ) node [ midway , below ] prop . [ coro : decsound ] ; background2 ;this paper deals with general stochastic transition systems ( hence possibly continuous state - space markov chains ) .we defined abstract properties of such stochastic processes , which allow one to design general procedures for their qualitative or quantitative analysis .effectivity of the approach requires some effectiveness assumption on specific high - level formalisms that are used to describe the stochastic process .we have demonstrated the effectiveness of the approach on two classes of systems : stochastic timed automata on the one hand , and generalized semi - markov processes on the other hand , can be instantiated in our framework . 
in both cases ,we recover known results ; but our approach yields further approximability results , which , up to our knowledge , are new .we believe that , more importantly , we provide in this paper a methodology to understand stochastic models from a verification and algorithmics point - of - view .section [ sec : guide ] gives a high - level description of our results , and of properties that should be satisfied by the stochastic model in order to apply our algorithms .in many cases , we showed that the hypotheses were really necessary to get the expected results , by providing counter - examples when the hypotheses are relaxed .as future work , we plan to investigate new applications , such as for instance the real - time stochastic systems generated by stochastic petri nets , or the infinite - state systems appearing in parameterized verification .also , we would like to adopt a similar generic approach for processes with non - determinism like markov decision processes , or even stochastic two - player games .[ app : probpathsequiv ] we have to show that for each , .since the complementary of each cylinder is a finite union of cylinders and since each denumerable unions of cylinders can be written as a denumerable disjoint union of cylinders , it suffices to show this for each cylinder with .we have to show that for each , it should be observed that , by symmetry , it suffices to show one of the implications .first , assume and fix .then from the definition of and and from the hypothesis , we get that : now consider and fix .suppose that , i.e. from the definition : write .we can write which is in from the hypotheses over . from , we can easily check that , which implies that and thus using again the definition , it follows that .now , assume that , fix and assume that .remember that we inductively define : from the hypotheses over , it is easily seen that for each , .let us consider the value . from the definition of , it holds that we thus get that we prove the two following statements : for each , a. and b. where if , will stand for the initial distribution . point ( a ) is here in order to establish that the sets are measurable , and point ( b ) aims at reducing our integrals to sets whose images have positive values .it should be observed that the second point is an immediate consequence of the first point .we thus only need to prove point ( a ) .we do this by induction over .first , if , we show that which will ensure that ( a ) is satisfied .first assume that is such that towards a contradiction , assume that .then it holds that which is the needed contradiction .now assume that .then from the definitions of and of , and from classical properties on integrals , it is straightforward to check that the second inclusion holds .now suppose that point ( a ) holds for each for some , and let us show that it is still true for .as before , it suffices to establish that the first inclusion can be verified just like in the first case .now assume that .we know that using the induction hypothesis over , we get that for each , and since , this induces that which concludes that point ( a ) is satisfied .hence from points ( a ) and ( b ) , we get that since and since , it follows that . from the hypothesis, we thus get that . 
now observing that we can prove similarly that , we can establish that which concludes the proof .[ app : lemma_integration ] the proof is by induction on .assume that , we have to show : .first , now let us unfold : now fix and assume that for each for each the equality above holds .we will prove that it is still the case for .first , observe that if then the induction hypothesis states that which is what we wanted . otherwise , if , then the hypothesis induction states that then using a similar argument as in the first case , we get that since .this concludes the proof .[ app : product - sigma - algebra ] it suffices to show that a. contains all rectangles ; b. ; and c. is a -algebra .property ( i ) follows from the decomposition any rectangle into elements of : property ( ii ) is straightforward since for every , and thus , the union also belongs to the -algebra .we finally establish property ( iii ) .first is non - empty as .then , for , the complement still belongs to since is a -algebra and hence for each , .similarly , we get that is closed under denumerable unions .[ app : produit - technique ] we will establish a link between distributions over and distributions over . in order to do so ,we introduce some notations . given we write for each , .also given and we inductively define observe that since is deterministic , those states are uniquely defined .we then have the following result .the proof of the above proposition will then be a direct consequence of the next lemma .[ lemma : probproduct ] for each initial distribution for , for each state of , for each and for each , it holds that we prove it by induction over .first if , we have to show that for every , every and every , which is trivial from the definition of .now fix .assume that for each , the above property holds true and show that it is still the case for .let , and .we have that using the induction hypothesis , we get that combining with , we thus obtain that which concludes the proof .[ app - btildemes ] we first prove the first point .remember that given , . observe that we can write : it thus suffices to show that for each , we will use similar arguments as in the proof of lemma [ lemma : equiv ] . remember that if , it holds that .first , if then this set corresponds to the set which is in . now if then which is in from the hypotheses over .now assume that , it hold that we inductively define : from the hypotheses over , it holds that for each . in the sequel, denotes . as in the proof of lemma [ lemma : equiv ] , we can show that firstly , and that for each , a. and b. it follows that now since for each , , it holds that if and only if , i.e. if and only if . andsince , it follows that and thus the second property is a direct consequence of the definition of .we now focus on the third property . towards a contradiction ,assume that there is such that but .it follows that there is such that and thus which is the wanted contradiction .let us show the fourth item .it should be observed that given , . it thus suffices to show that . since , towards a contradiction , we assume that . since it follows that there is and such that from lemma [ lemma : integration ], writing , we get that and from the third property proven previously , we deduce that with which contradicts the second property of this lemma . finally , we prove the last property .it is straightforward by observing that the two events measured in this equality are exactly the same : and . for each , we have that . if , then and thus . 
otherwise , if , then and thus .this directly implies that .we show this by induction on .case is by definition . fix some and assume that the statement holds true for each . by induction hypothesis , we have that is equivalent to .we want to show that is equivalent to .we first notice that is equivalent to .indeed write and . from the induction hypothesis , we know that and . following a similar argument as in the proof of lemma [ lemma : equiv ] and from the definition of , we can deduce that is equivalent to .so it remains to show that is equivalent to , when .this is by definition of an -abstraction .we do the proof by induction on .the case is obvious from the definition of .now fix and assume that for each , for each and for each , we show that it is still the case for . fix and .we let and .note that we hence assume that .we first realize that . indeed for each , then , applying lemma [ lemma : integration ] , we get : and by definition of an -abstraction , the measures and are equivalent .hence from lemma [ lemma : equiv ] , from the hypothesis of induction , we get that since , we conclude : we still have to consider the case where . in that case , and thus which terminates the proof .[ app : dmc ] we handle the case of soundness . indeed assume that for each and for each , the condition presented in the statement ( second item ) holds true . then fix , and assume that and show that .towards a contradiction , assume that .then , since is a dmc , there is such that and from the hypothesis , it follows that .observe that since , we have that .hence we get a contradiction by noticing : [ app - attractorsound ] fix and .we want to show that is -decisive w.r.t .we therefore have to show that . towards a contradictionwe assume that , i.e. .since is an attractor of , we deduce from lemma [ lemma : attractorgf ] that , hence : we let be the subset of states of such that : due to equation , is non - empty , and furthermore every such belongs to and .we set .in particular , , hence from lemma [ lemma : btildecomplementaire ] ( third item ) we get that for every , . according to hypothesis , for every , we can find and such that for every , \alpha^{-1}(b))\ge p_s.\ ] ] then taking and ( since is finite ) , it holds that for every , \alpha^{-1}(b))\ge p \qquad \text{hence } \qquad \prob_{\nu}^{\calt_1}(\g[\leq k ] \alpha^{-1}(b^c))\leq 1-p .\label{eq : upbound}\ ] ] from , we can deduce that : it remains to show the last inequality .we will prove it by induction as follows .first we introduce some useful notations .we will write for the finite sequence where occurs exactly times , and given we will write for the finite sequence where occurs exactly times . then observe that \alpha^{-1}(b^c ) } \subseteq \\\bigcap_{n\in\in } \bigcup_{(j_0,\ldots , j_n)\in\in^{n+1 } } \cyl(s_1^{j_0 } , a'_1 , b^c_k , s_1^{j_1},a'_1 , b^c_k,\ldots , s_1^{j_n } , a'_1 , b^c_k ) .\end{gathered}\ ] ] we will prove by induction over that for each and for each , first fix and .it holds that for each now fix and assume that for each and for each the inequality holds true .we want to show that it is still satisfied for . for each and for each have that for some , from lemma [ lemma : integration ] .we now write and for each , . 
then , still from lemma [ lemma : integration ] , we can establish that through the limits , we conclude that \alpha^{-1}(b^c))\leq \lim_{n\to\infty } ( 1-p)^n = 0 ] from the definitions of and .it follows that for each , for each , \alpha^{-1}(b^c))\leq 1-p.\ ] ] it thus holds that for each , \alpha^{-1}(b^c ) ) & = \sum_{i=1}^m \mu'(\alpha^{-1}(s'_i ) ) \cdot\prob_{\mu'_{\alpha^{-1}(s'_i)}}^{\calt_1 } ( \g[\leq k ] \alpha^{-1}(b^c))\notag\\ { } & \leq \sum_{i=1}^m \mu'(\alpha^{-1}(s'_i ) ) \cdot ( 1-p ) = ( 1-p)\cdot\sum_{i=1}^m \mu'(\alpha^{-1}(s'_i))\notag\\ { } & \leq 1-p .\end{aligned}\ ] ] we can then deduce that we can prove this last inequality by induction as follows .first we introduce some useful notations. we will write for the finite sequence where occurs exactly times , and given we will write for the finite sequence where occurs exactly times . then observe that \alpha^{-1}(b^c ) } \subseteq \\\bigcap_{n\in\in } \bigcup_{(j_0,\ldots , j_n)\in\in^{n+1 } } \cyl(s_1^{j_0 } , a''_1 , b^c_k , s_1^{j_1},a''_1 , b^c_k,\ldots , s_1^{j_n } , a''_1 , b^c_k ) .\end{gathered}\ ] ] we will prove by induction over that for each and for each , first fix and .it holds that for each now fix and assume that for each and for each the inequality holds true .we want to show that it is still satisfied for .for each and for each we have that for some , from lemma [ lemma : integration ] .we now write and for each , . then , still from lemma [ lemma : integration ] , we can establish that through the limits , we conclude that \alpha^{-1}(b^c))\leq \lim_{n\to\infty } ( 1-p)^n = 0 ] .towards a contradiction , we suppose that .since \btilde } \cap \ev{\calt}{\f[=m ] \widetilde{\btilde}},\ ] ] we deduce that there are such that \btilde \wedge \f[=m ] \widetilde{\btilde})>0 ]. we can show that b\mid e)=0 ] .indeed we get that : b\mid e ) & = \frac{\prob_{\mu}^{\calt}((\f[\ge n ] b)\wedge e)}{\prob_{\mu}^{\calt}(e)}\notag\\ { } & \leq \frac{\prob_{\mu}^{\calt}(\f[\ge n ] b\wedge \f[=n ] \btilde ) } { \prob_{\mu}^{\calt}(e)}\notag\\ { } & = 0\notag\end{aligned}\ ] ] from the definition of . the equality \btilde\mid e)=0 $ ] is proved similarly .writing , it follows that b\vee \f[\ge q ] \btilde \mid e)=0.\ ] ] and since , this contradicts the fact that is , which concludes the proof .fix and such that .fix . we know that then from lemma [ lemma : probproduct ] , we know that for each as this holds true for each , we thus get that from the hypothesis .this concludes the proof .fix such that for each , .we want to prove that for each , .fix and compute : note that induces a distribution as follows : for each , .writing it then holds that .we then get , from the hypothesis and lemma [ lemma : soundproduct1 ] , that for each . hence , which concludes the proof .we first show that is an -abstraction of .it suffices to show that for each , for each and for each , fix , and .write for the unique label such that . in order to prove, we will use the fact that is an -abstraction of . and in order to make the link with the wanted equivalence , we will use lemma [ lemma : probproduct ] .we can establish that .indeed given and , it holds that hence we get that where the first and third equivalences hold from lemma [ lemma : probproduct ] , and the second equivalence holds from the fact that is an -abstraction of . 
finally , since is decisive w.r.t for each and since is an -abstraction of , proposition [ coro : decsound ] allows us to conclude that is a sound -abstraction of .[ ex : cex - sound ] we illustrate remark [ rk : produit ] by exhibiting an example where soundness ( w.r.t . a fixed distribution ) as well as decisiveness properties do not transfer to the product with a deterministic muller automaton .consider the dmc depicted on the left of figure [ figure : soundprod ] which corresponds to the random walk over from example [ example : dmcrandomwalk ] , when .consider also the finite mc on the right of the same figure .clearly enough , is an -abstraction of for the mapping defined as follows : , and for any .define as the initial distribution in . for any , it follows that is a -sound -abstraction of .it should be noted that it is however not sound when considering as initial distribution .indeed , though ( and ) .( lzero ) at ( 0,0 ) ; ( lun ) at ( 2.2,0 ) ; ( ldeux ) at ( 4.4,0 ) ; ( lint ) at ( 6.6 , 0 ) ; ( lzero ) to[bend left=30 ] node[ptt , midway , above ] ( lun ) ; ( lun ) to[bend left=30 ] node[ptt , midway , below ] ( lzero ) ; ( lun ) to[bend left=30 ] node[ptt , midway , above ] ( ldeux ) ; ( ldeux ) to[bend left=30 ] node[ptt , midway , below ] ( lun ) ; ( ldeux ) to[bend left=30 ] node[ptt , midway , above ] ( lint ) ; ( lint ) to[bend left=30 ] node[ptt , midway , below ] ( ldeux ) ; ( szero ) at ( 7.8 , 0 ) ; ( sun ) at ( 10 , 0 ) ; ( sdeux ) at ( 12.2 , 0 ) ; ( sdeux.15) .. controls + ( 45:1 ) and + ( 315:1) .. (sdeux.345 ) node[ptt , midway , right ] ; ( szero ) to[bend left=30 ] node[ptt , midway , above ] ( sun ) ; ( sun ) to[bend left=30 ] node[ptt , midway , below ] ( szero ) ; ( sun ) to[bend left=30 ] node[ptt , midway , above ] ( sdeux ) ; ( sdeux ) to[bend left=30 ] node[ptt , midway , below ] ( sun ) ; consider now the muller automaton of section [ sec : prelim ] on the left of figure [ figure : mullerautomaton ] .as stated in lemma [ lemma : alphabar ] , it holds that is an -abstraction of where for each and each , .consider and .it then holds that and that .it is easily observed that starting in state ( resp . ) in ( resp . ) , then if we visit in the future a state ( resp . ) we will necessarily get that . keeping this in mind, one can see that while where the first equality holds from lemma [ lemma : integration ] and the second equality holds from lemma [ lemma : probproduct ] .this proves that is not -sound for . now , observe that is decisive w.r.t .any set of states from as we have seen that for any set of states .it should be noted that is not decisive by considering as the initial distribution and . in this case , and thus .consider now , we have already shown that .it can be established that which are states not reachable from .we deduce that .this shows that is not decisive w.r.t . from .
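To make the role of the walk's bias in this counter-example concrete, here is a small Python sketch. It is our own illustration: the parameter name p, the behaviour at state 0 and the horizon/truncation values are assumptions, not details taken from the paper. It estimates the probability that a random walk on the naturals, moving up with probability p and down with probability 1-p, ever reaches state 0. For p <= 1/2 the estimate tends to 1, so the chain is decisive with respect to {0}; for p > 1/2 it stays at the gambler's-ruin value (1-p)/p < 1, which is the regime in which decisiveness, and with it soundness for arbitrary initial distributions, breaks down.

```python
import numpy as np

def prob_reach_zero(p, start=1, horizon=50_000, cutoff=1_000):
    """Probability that a random walk on {0, 1, 2, ...} started in `start`
    hits state 0 within `horizon` steps.  The chain is truncated at `cutoff`
    (mass escaping past the cutoff counts as failure, so the value returned
    is a lower bound on the true reachability probability)."""
    v = np.zeros(cutoff + 1)
    v[0] = 1.0                                   # the target set {0}
    for _ in range(horizon):
        new = v.copy()
        # from n >= 1: move up with probability p, down with probability 1 - p
        new[1:cutoff] = p * v[2:cutoff + 1] + (1 - p) * v[0:cutoff - 1]
        new[0] = 1.0
        v = new
    return v[start]

for p in (0.3, 0.5, 0.7):
    est = prob_reach_zero(p)
    exact = min(1.0, (1 - p) / p)                # gambler's-ruin value from state 1
    # at the critical value p = 0.5 convergence to 1 is slow, O(1/sqrt(horizon))
    print(f"p = {p}: estimated P(F {{0}}) = {est:.4f}, limit value = {exact:.4f}")
```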
a decade ago , abdulla , ben henda and mayr introduced the elegant concept of decisiveness for denumerable markov chains . roughly speaking , decisiveness allows one to lift most good properties from finite markov chains to denumerable ones , and therefore to adapt existing verification algorithms to infinite - state models . decisive markov chains however do not encompass stochastic real - time systems , and general stochastic transition systems ( stss for short ) are needed . in this article , we provide a framework to perform both the qualitative and the quantitative analysis of stss . our first contribution is to define various notions of decisiveness ( inherited from ) , notions of fairness and of attractors for stss , and explicit the relationships between them . as a second contribution , we define a notion of abstraction , together with natural concepts of soundness and completeness , and we give general transfer properties , which will be central to several verification algorithms on stss . our third contribution focuses on qualitative model - checking . beyond ( repeated ) reachability properties for which our technics are strongly inspired by , we use abstractions to design algorithms for the qualitative model - checking problem of arbitrary -regular properties , when the sts admits a denumerable ( sound and complete ) abstraction with a finite attractor . our fourth contribution is the design of generic approximation procedures for quantitative model - checking ; in addition to extensions of for general stss , we design approximation algorithms for -regular properties ( once again by means of specific abstractions ) . last , our fifth contribution consists in instantiating our framework with stochastic timed automata ( sta ) and generalized semi - markov processes ( gsmp ) , two models combining dense - time and probabilities . this allows us to derive decidability and approximability results for the verification of these two models . some of these results were known from the literature , but our generic approach permits to view them in a unified framework , and to obtain them with less effort . we also derive interesting new approximability results for sta and gsmps .
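The quantitative procedure the abstract alludes to can be spelled out on a small finite example. The sketch below is entirely ours and assumes the chain is given explicitly as a stochastic matrix and is decisive with respect to the target set (otherwise the loop need not terminate). It accumulates, step by step, the probability mass that has already reached the target B and the mass that has reached the avoid set of states from which B is unreachable (the set written \tilde{B} in the paper); decisiveness guarantees that the unresolved mass vanishes, so the loop stops with lower and upper bounds at distance epsilon on the reachability probability.

```python
import numpy as np

def avoid_set(P, B):
    """States from which the target set B is unreachable, by backward graph search."""
    n = len(P)
    can_reach = set(B)
    changed = True
    while changed:
        changed = False
        for s in range(n):
            if s not in can_reach and any(P[s, t] > 0 for t in can_reach):
                can_reach.add(s)
                changed = True
    return [s for s in range(n) if s not in can_reach]

def approx_reach(P, init, B, eps=1e-6):
    """Approximate Prob_init(F B) up to eps for a decisive finite-state DMC.
    p_yes accumulates mass that has hit B, p_no mass that has hit the avoid set;
    decisiveness guarantees p_yes + p_no -> 1."""
    Bt = set(avoid_set(P, B))
    B = set(B)
    mu = np.array(init, dtype=float)
    p_yes, p_no = 0.0, 0.0
    while 1.0 - p_yes - p_no > eps:
        p_yes += mu[list(B)].sum()
        p_no += mu[list(Bt)].sum()
        for s in B | Bt:                 # make B and the avoid set absorbing
            mu[s] = 0.0
        mu = mu @ P
    return p_yes, p_yes + (1.0 - p_yes - p_no)   # lower / upper bound on Prob(F B)

# toy 4-state chain: reach state 3 from state 0 (state 2 is a sink that cannot reach 3)
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.4, 0.0, 0.0, 0.6],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(approx_reach(P, init=[1, 0, 0, 0], B=[3]))   # both bounds close to 0.375
```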
suppose we are given sensors , each one sends a function ( e.g. a signal or image ) to a receiver common to all sensors. during transmission each gets convolved with a function ( the may all differ from each other ) .the receiver records the function , given by the sum of all these convolved signals .more precisely , where is additive noise .assume that the receiver knows neither nor .when and under which conditions is it possible to recover all the individual signals and from just one received signal ?blind deconvolution ( when ) by itself is already a hard problem to solve .here we deal with the even more difficult situation of a mixture of blind deconvolution problems .thus we need to correctly blindly deconvolve and demix at the same time .this challenging problem appears in a variety of applications , such as audio processing , image processing , neuroscience , spectroscopy , astronomy .it also arises in wireless communications and is expected to play a central role in connection with the future internet - of - things .common to almost all approaches to tackle this problem is the assumption that we have multiple received signals at our disposal , often at least as many received signals as there are transmitted signals .indeed , many of the existing methods fail if the assumption of multiple received signals is not fulfilled . in this paper, we consider the rather difficult case , where only one received signal is given , as shown in .of course , without further assumptions , this problem is highly underdetermined and not solvable . we will prove that under reasonable and practical conditions , it is indeed possible to recover the transmitted signals and the associated channels in a robust , reliable , and efficient manner from just one single received signal .our theory has important implications for applications , such as the internet - of - things , since it paves the way for an efficient multi - sensor communication strategy with minimal signaling overhead . to provide a glimpse of the kind of results we will prove ,let us assume that each of the lies in a known subspace of dimension , i.e. , there exists matrices of size such that .in addition the matrices need to satisfy a certain `` local '' mutual incoherence condition described in detail in . this condition can be satisfied if the are e.g. gaussian random matrices .we will prove a formal and slightly more general version ( see theorem [ thm : main ] and theorem [ thm : noise ] ) of the following informal theorem .for simplicity for the moment we consider a noisefree scenario , that is , .below and throughout the paper " denotes circular convolution .[ thm : informal ] let and let the be i.i.d . gaussian random matrices .furthermore , assume that the impulse responses have _ maximum delay spread _ , i.e. , for each there holds if .let be a certain `` incoherence parameter '' related to the measurement matrices , defined in .suppose we are given then , as long as the number of measurements satisfies ( where is a numerical constant ) , all ( and thus ) as well as all can be recovered from with high probability by solving a semidefinite program .recovering and is only possible up to a constant , since we can always multiply each with and each with and still get the same result . 
hence , here and throughout the paper , recovery of the vectors and always means recovery modulo constants .we point out that the emphasis of this paper is on developing a theoretical and algorithmic framework for joint blind deconvolution and blind demixing .a detailed discussion of applications is beyond the scope of this paper .there are several aspects , such as time synchronization , that do play a role in some applications and need further attention .we postpone such details to a forthcoming paper , in which we plan to elaborate on the proposed framework in connection with specific applications .problems of the type or are ubiquitous in many applied scientific disciplines and in applications , see e.g .thus , there is a large body of works to solve different versions of these problems .most of the existing works however require the availability of multiple received signals . and indeed , it is not hard to imagine that for instance an svd - based approach will succeed if ( and must fail if ) .a sparsity - based approach can be found in .however , in this paper we are interested in the case where we have only one single received signal a single snapshot , in the jargon of array processing .hence , there is little overlap between these methods heavily relying on multiple snapshots ( manu of which do not come with any theory ) and the work presented here .the setup in is reminiscent of a single - antenna multi - user spread spectrum communication scenario .there , the matrix represents the spreading matrix assigned to the -th user and models the associated multipath channel .there are numerous papers on blind channel estimation in connection with cdma , including the previously cited articles .our work differs from the existing literature on this topic in several ways : as mentioned before , we do not require that we have multiple received signals , we allow all multipath channels to differ from each other , and do not impose a particular channel model .moreover , we provide a rigorous mathematical theory , instead of just empirical observations .the special case ( one unknown signal and one unknown convolving function ) reduces to the standard blind deconvolution problem , which has been heavily studied in the literature , cf . and the references therein . many of the techniques for `` ordinary '' blind deconvolution do not extend ( at least not in any obvious manner ) to the case .hence , there is essentially no overlap with this work with one notable exception . the pioneering paper has definitely inspired our work and also informed many of the proof techniques used in this paper .hence , our paper can and should be seen as an extension of the `` single - user '' ( ) results in to the multi - user setting ( ) . however , it will not come as a big surprise to the reader familiar with , that there is no simple way to extend the results in to the multi - user setting unless we assume that we have multiple received signals . indeed , as may be obvious from the length of the proofs in our paper , there are substantial differences in the theoretical derivations between this manuscript and . in particular, the sufficient condition for exact recovery in this paper is more complicated since ( ) users are considered and the incoherence " between users need to be introduced properly .moreover , the construction of approximate dual certificate is nontrivial as well ( see section [ s : dual ] ) in the multi - user " scenario .the paper considers the following generalization of apply to as well . 
] .assume that we are given signals , the goal is to recover the and from .this setting is somewhat in the spirit of , but it is significantly less challenging , since ( i ) it assumes the same convolution function for each signal and ( ii ) there are as many output signals as we have input signals .non - blind versions of or can be found for instance in . in the very interesting paper , the authors analyze various problems of decomposing a given observation into multiple incoherent components , which can be expressed as here are ( decomposable ) norms that encourage various types of low - complexity structure .however , as mentioned before , there is no `` blind '' component in the problems analyzed in .moreover , while is formally somewhat similar to the semidefinite program that we derive to solve the blind deconvolution - blind demixing problem ( see ) , the dissimilarity of the right - hand sides in and makes all the differences when theoretically analyzing these two problems .the current manuscript can as well be seen as an extension of our work on self - calibration to the multi - sensor case . in this context , we also refer to related ( single - input - single - output ) analysis in . in section [ s : prelim ] we describe in detail the setup and the problem we are solving .we also introduce some notations and key concepts used throughout the manuscript .the main results for the noiseless as well as the noisy case are stated in section [ s : maintheorem ] .numerical experiments can be found in section [ s : numerics ] .section [ s : proofs ] is devoted to the proofs of these results .we conclude in section [ s : conclusion ] and present some auxiliary results in the appendix .before moving to the basic model , we introduce notation which will be used throughout the paper .matrices and vectors are denoted in boldface such as and .the individual entries of a matrix or a vector are denoted in normal font such as or for any matrix , denotes nuclear norm , i.e. , the sum of its singular values ; denotes operator norm , i.e. , its largest singular value , and denotes the frobenius norm , i.e. , .for any vector , denotes its euclidean norm . for both matrices and vectors , and stand for the transpose of and respectively while and denote their complex conjugate transpose . and denote the complex conjugate of and respectively .we equip the matrix space with the inner product defined as a special case is the inner product of two vectors , i.e. , the identity matrix of size is denoted by . for a given vector , represents the diagonal matrix whose diagonal entries are given by the vector . throughout the paper, stands for a constant and is a constant which depends linearly on ( and on no other numbers ) .for the two linear subspaces and defined in and , we denote the projection of on and as and respectively . and are the corresponding projection operators onto and we develop our theory for a more general model than the blind deconvolution / blind demixing model discussed in section [ s : intro ] .our framework also covers certain self - calibration scenarios involving multiple sensors .we consider the following setup where , , , and .we assume that all the matrices and are given , but none of the and are known .note that all and can be of different lengths .we point out that the total number of measurements is given by the length of , i.e. 
, by .moreover , we let and throughout our presentation .this model includes the blind deconvolution - blind demixing problem as a special case , as we will explain in section [ s : maintheorem ] .but it also includes other cases as well .consider for instance a linear system , where the measurement matrices are not fully known due to lack of calibration and represents the unknown calibration parameters associated with .an important special situation that arises e.g. in array calibration is the case where we only know the direction of the rows of .in other words , the norms of each of the rows of are unknown .if in addition each of the belongs to a known subspace represented by , i.e. , , then we can write such an as .let denote the -th column of and the -th column of .a simple application of linear algebra gives where is the -th entry of one may find an obvious difficulty of this problem as the nonlinear relation between the measurement vectors and the unknowns proceeding with the meanwhile well - established lifting trick , we let and define the _ linear _ mapping for by note that the adjoint operator of is since is equipped with the inner product for any and . can be also written into simple matrix form , i.e. , , which is easily verified by definition .thus we have lifted the _ non - linear vector - valued _equations to _ linear matrix - valued _ equations given by alas , the set of linear equations will be highly underdetermined , unless we make the number of measurements very large , which may not be desirable or feasible in practice .moreover , finding such rank-1 matrices satisfying is generally an np - hard problem .hence , to combat this underdeterminedness , we attempt to recover by solving the following nuclear norm minimization problem , if the solutions ( or the minimizers to ) are all rank - one , we can easily extract and from via a simple matrix factorization . in case of noisy data , the will not be exactly rank - one , in which case we set and to be the left and right singular vector respectively , associated with the largest singular value of .naturally , the question arises if and when the solution to coincides with the true solution .it is the main purpose of this paper to shed light on this question .analogous to matrix completion , where one needs to impose certain incoherence conditions on the singular vectors ( see e.g. ) , we introduce two quantities that describe a notion of incoherence of the matrices .we require and define implies that and .in particular , if each is a partial dft matrix then .the quantity will be useful to establish theorem [ thm : noise ] , while the main purpose of introducing is to quantify a `` joint incoherence pattern '' on all .namely , there is a _ common _partition of the index set with and such that for each pair of with and , we have which says that each does not deviate too much from . the key question here is whether such a _ common _ partition exists .it is hard to answer it in general . to the best of our knowledge, it is known that for each , there exists a partition ( where depends on ) such that if where this argument is shown to be true in by using theorem 1.2 in .based on this observation , at least we have following several special cases which satisfy for a common partition . 1 .all are the same .then the common partition can be chosen the same as for any particular 2 .if each is a submatrix of , then we can simply let such that holds .if all are `` low - frequency '' dft matrices , i.e. 
, the first columns of an dft matrix with , we can actually create an _explicit _ partition of such that for example , suppose and , we can achieve and by letting . a short proof will be provided in section [ sub : fourier ] .some direct implications of are where .now let us introduce the second incoherence quantity , which is also crucial in the proof of theorem [ thm : main ] , the range of is given in proposition [ prop : muh ] .[ remark_lemma ] the attentive reader may have noticed that the definition of is a bit more intricate than the one in , where only depends on latexmath:[ ] from and the fact that is real . taking the sum of and over gives and \cdot \|{\boldsymbol{t}}_p\| \\ & \leq & \frac{32l^{2}}{9q^{2 } } \cdot \frac{{\mu_{\max}}^2k}{l } \cdot \frac{5q}{4l}\\ & \leq & \frac{40 { \mu_{\max}}^2k}{9q}.\end{aligned}\ ] ] thus the variance is bounded above by and for some constant .then we just use to estimate the deviation of from by choosing .setting gives us where and are properly assumed to be smaller than in particular , we take and have with the probability at least in this section , we aim to show that , where is defined in , i.e. , the second condition in holds with high probability .the main idea here is first to show that a more general and stronger version of incoherent property , holds with high probability for any and . since the derivation is essentially the same for all different pairs of with , without loss of generality , we take and as an example throughout this section .we finish the proof by taking the union bound over all possible sets of following the same procedures as the previous section , we have explicit expressions for and , where , , and are defined in except the notation , where we omit subscript in the previous section . by combining and , we arrive at .\ ] ] note that the expectations of all terms are equal to because is independent of and both and have zero mean .define as and there holds each can be treated as a matrix because it is a linear operator from to .[ prop : mix ] under the assumption of and and that are standard gaussian random vectors of length holds with probability at least if where and by setting , we immediately have , which is written into the following corollary .[ for : mix ] under the assumption of and and that are standard gaussian random vectors of length holds with probability at least if where and in other words , the proof of proposition [ prop : mix ] follows two steps .first we will show each holds with high probability , followed by taking the union bound over all and . for any fixed set of with , it has been shown , in lemma [ lem : cm1-mix ] , that with probability at least if .then we simply take the union bound over all and and it leads to if where there are at most events and . in order to make the probability of success at least , we can just choose , or equivalently , .[ lem : cm1-mix ] under the assumptions of , and and that independently for and , there holds with probability if .we only prove the bound for , the proofs of the bounds for , and use similar arguments and are left to the reader . following from the definition in , and . 
by using lemma [ lem:6jb ] and [ lemma : psi ] , where follows from lemma [ lemma : psi ] .we proceed to estimate by first finding and have the following forms : and the expectations of and are and taking the sum over leads to and thus the variance is bounded by then we just apply to estimate the deviation of from by choosing .letting gives us with probability at least where and are properly assumed to be smaller than let and , with the probability at least in this section , we will construct a such that holds simultaneously for all .if such a exists , then solving yields exact recovery according to lemma [ lemma : suffcond ] .the difficulty of this mission is obvious since we require all to be close to and small " on .however , it becomes possible with help of the incoherence between and .the method to achieve this goal is to apply a well - known and widely used technique called _ golfing scheme _ , developed by gross in .the approximate dual certificate satisfying lemma [ lemma : suffcond ] is constructed via a sequence of random matrices , by following the philosophy of golfing scheme . the constructed sequence would approach on exponentially fast while keeping small " on at the same time .[ [ construct - an - approximate - dual - certificate - via - golfing - scheme ] ] construct an approximate dual certificate via golfing scheme + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 1 .initialize for all and 2 . for from to ( where will be specified later in lemma [ lem : y - hx ] ), we define the following recursive formula : 3 . denotes the result after -th iteration and let , i.e. , the final outcome for each .denote as the difference between and on , i.e. , and can be rewritten into the following form : moreover , yields the following relation : from and .an important observation here is that each is an _ unbiased _ estimator of , i.e. , where for all due to the independence between and and .remember that are independent of , which is based on the construction of sequences in and .more precisely , the expectation above should be treated as the conditional expectation of given are known .[ lem : y - hx ] conditioned on and , the golfing scheme and generate a sequence of such that hold simultaneously for all in particular , if , where . in other words , the first condition in holds . directly following from leads to that and thus . by applying and , it is easy to see that recall that for all and by the induction above , we prove that in the previous section , we have already shown that approaches exponentially fast with respect to .the only missing piece of the proof is to show that is bounded by for all , i.e. , the second condition in holds . without loss of generality , we set .following directly from and , there holds where .it suffices to demonstrate that for in order to justify since before moving to the proof , we first define the quantity which will be useful in the proof , in particular , because of and the definition of in .also we define as and there holds the definition of is a little complicated but the idea is simple .since we have already shown in lemma [ lem : y - hx ] that is very close to for large , can be viewed as a measure of the incoherence between ( an approximation of ) and .we would like to have small " , i.e. , which guarantees that concentrates well around for all and this insight leads us to the following lemma . 
[ lem : normbound ] let be defined in and satisfy if , then simultaneously for with probability at least .thus , the second condition in , holds simultaneously for all .the assumption is justified in lemma [ lem : mup - half ] . without loss of generality , we start with it is shown in that first we rewrite into the sum of rank-1 matrices with mean by and , .\ ] ] denote by where is defined in .the goal is to bound the operator norm of , i.e , , by .an important fact here is that is independent of all with because is a function of following from and the assumption , we have the proof is more or less a routine : estimate , and apply .for any fixed .\\\end{aligned}\ ] ] note that for , and .there holds follow from , and lemma [ lemma : psi ] .taking the sum over , from to , gives thus we have now let s move on to the estimation of . from , we have the corresponding and have quite complicated expressions. however , all the cross terms have zero expectation , which simplifies and a lot . which follows from . which follows from and \\ & \leq & \frac{2{\mu_{\max}}^2k_1}{l } \sum_{j=1}^r \sum_{l\in\gamma_p } { \text{tr}}({\boldsymbol{w}}_{j , p-1}^*{\boldsymbol{s}}_{j , p } { \boldsymbol{b}}_{j , l}{\boldsymbol{b}}_{j , l}^*{\boldsymbol{s}}_{j , p } { \boldsymbol{w}}_{j , p-1 } ) \\ & \leq & \frac{2{\mu_{\max}}^2k_1}{l } \sum_{j=1}^r \|{\boldsymbol{w}}_{j,{p-1}}{\boldsymbol{w}}_{j , p-1}^*\|_*\|{\boldsymbol{s}}_{j , p}\| \\ & \leq & \frac{2{\mu_{\max}}^2k_1}{l } \frac{4l}{3q } \sum_{j=1}^r\|{\boldsymbol{w}}_{j,{p-1}}\|_f^2 \\ & \leq & c\frac{4^{-p+1}r{\mu_{\max}}^2k_1}{q}\end{aligned}\ ] ] where the last inequality follows from and is the dual norm of . \right\| \\ & \leq & \max_{j ,l } \|{\boldsymbol{w}}_{j , l}\|^2 \cdot \left\| \sum_{l\in\gamma_p } \left [ rn_1{\boldsymbol{b}}_{1,l}{\boldsymbol{b}}_{1,l}^ * + { \boldsymbol{b}}_{1,l}{\boldsymbol{b}}_{1,l}^*\right ] \right\| \\ & \leq & \frac{\mu^2_{p-1 } l}{q^2 } \cdot 2rn_1 \|{\boldsymbol{t}}_{1,p}\| = \frac{5r\mu^2_{p-1}n_1}{2q } \\ & \leq & c\frac{4^{-p+1}r\mu^2_h n_1}{q}\end{aligned}\ ] ] where .finally we have an upper bound of : by using bernstein inequality with and , we have in order to let hold with probability at least , it suffices to let this finishes the proof for case when then we take the union bound over all and , i.e. , totally events and then holds simultaneously for all with probability at least . to compensate the loss of probability from the union bound , we can choose , which gives .recall that is defined in as .the goal is to show that and thus hold with high probability .[ lem : mup - half ] under the assumption of , and and that independently for then with probability at least if . in order to show that , it is equivalent to prove for all and . from now on ,we set and fix and show that holds with high probability .then taking the union bound over completes the proof . following from and , there holds obviously , follows directly from the following two inequalities , [ [ step-1-proof - of - boldsymbolb_1lboldsymbols_1p1pi_1-leq - fracsqrtlmu_p-14q ] ] step 1 : proof of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for a fixed , \end{aligned}\ ] ] where has an explicit form in .define and there holds our goal now is to bound both and by .first we take a look at . 
for each , which follows from and in .the expectation of and can be easily computed , \\ & = & ( n_1 + 1 ) |{\boldsymbol{b}}_{1,l}^*{\boldsymbol{s}}_{1,p+1}{\boldsymbol{h}}_1|^2 |{\boldsymbol{h}}_1^*{\boldsymbol{b}}_{1,k}|^2 \|{\boldsymbol{w}}_{1,k}\|^2 , \\\operatorname{\mathbb e}({\boldsymbol{z}}_k{\boldsymbol{z}}_k^ * ) & = & |{\boldsymbol{b}}_{1,l}^*{\boldsymbol{s}}_{1,p+1}{\boldsymbol{h}}_1|^2 |{\boldsymbol{h}}_1^*{\boldsymbol{b}}_{1,k}|^2 \operatorname{\mathbb e}[({\boldsymbol{a}}_{1,k}{\boldsymbol{a}}_{1,k}^*- { \boldsymbol{i}}_{n_1 } ) { \boldsymbol{w}}_{1,k}{\boldsymbol{w}}_{1,k}^*({\boldsymbol{a}}_{1,k}{\boldsymbol{a}}_{1,k}^*- { \boldsymbol{i}}_{n_1 } ) ] \\ & = & |{\boldsymbol{b}}_{1,l}^*{\boldsymbol{s}}_{1,p+1}{\boldsymbol{h}}_1|^2 |{\boldsymbol{h}}_1^*{\boldsymbol{b}}_{1,k}|^2 ( \|{\boldsymbol{w}}_{1,k}\|^2 { \boldsymbol{i}}_{n_1 } + \bar{{\boldsymbol{w}}}_{1,k}\bar{{\boldsymbol{w}}}_{1,k}^*),\end{aligned}\ ] ] which follows from and . the estimation of is quite similar to that of andthus we give the result directly without going to the details , therefore , and similarly , we have then we just apply with and to estimate , note that in and thus it suffices to let to ensure that holds with probability at least concerning in , we first estimate : where in .thus furthermore , \\ & = & |{\boldsymbol{b}}_{1,l}^*{\boldsymbol{s}}_{1,p+1 } ( { \boldsymbol{i}}_{k_1 } - { \boldsymbol{h}}_1{\boldsymbol{h}}_1^*){\boldsymbol{b}}_{1,k}|^2 { \boldsymbol{w}}_{1,k}^*({\boldsymbol{i}}_{n_1 } + { \boldsymbol{x}}_1{\boldsymbol{x}}_1^*){\boldsymbol{w}}_{1,k } \end{aligned}\ ] ] which follows from .the variance is bounded by similar to what we have done in , note that and thus guarantees that holds with probability at least combining and gives if [ [ step-2-proof - of - boldsymbolb_1lboldsymbols_1p1pi_2-leq - fracsqrtlmu_p-14q ] ] step 2 : proof of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for any fixed , now we rewrite into where by the triangle inequality , .\ ] ] in order to bound by , it suffices to prove that for all , for , where follows from lemma [ lemma : psi ] .now we move on to the estimation of \\ & = & n_1\sum_{k\in\gamma_p } | { \boldsymbol{b}}_{1,l}^*{\boldsymbol{s}}_{1,p+1 } { \boldsymbol{h}}_1|^2|{\boldsymbol{h}}_1^ * { \boldsymbol{b}}_{1,k } |^2 | \|{\boldsymbol{w}}_{j , k}\|^2 \\ & \leq & n_1 \frac{l\mu^2_h}{q^2 } \max_{k\in\gamma_p } \|{\boldsymbol{w}}_{j , k}\|^2 \|{\boldsymbol{t}}_{1,p}\| \\ & \leq & \frac{5\mu^2_h n_1 \max_{k\in\gamma_p } \|{\boldsymbol{w}}_{j , k}\|^2}{4q}\end{aligned}\ ] ] and similarly , thus . by applying bernstein inequality, we have where choosing leads to with probability at least for a fixed for defined in and fixed , where follows from lemma [ lemma : psi ] .now we proceed to compute the variance by then we apply bernstein inequality to get an upper bound of for fixed , with probability if . 
thus combined with , we have proven that for fixed , holds with probability at least by taking union bound over and using , we can conclude that with probability if .[ [ final - step - proof - of ] ] final step : proof of + + + + + + + + + + + + + + + + + + + + + to sum up , we have already shown that for fixed and , \leq \frac{1}{2}\mu_{p-1}\ ] ] with probability at least if .then we take union bound over all and and and obtain if we choose a slightly larger as , i.e. , , then holds for all with probability at least we now assemble the various intermediate and auxiliary results to establish theorem [ thm : main ] .we recall that theorem [ thm : main ] follows immediately from lemma [ lemma : suffcond ] , which in turn hinges on the validity of the conditions and .let us focus on condition first , i.e. , we need to show that under the assumptions of theorem [ thm : main ] , proposition [ prop : lcisop ] ensures that condition holds with probability at least if where and moving on to the incoherence condition , proposition [ prop : mix ] implies that this condition holds with probability at least if .furthermore , in condition is bounded by with probability according to lemma 1 in .we now turn our attention to condition . under the assumption that properties and hold, lemma [ lem : y - hx ] implies the first part of condition .the two properties and have been established in propositions [ prop : lcisop ] and [ prop : mix ] , respectively .the second part of the approximate dual certificate condition in is established in lemma [ lem : normbound ] with the aid of lemma [ lem : mup - half ] , with probability at least if . by `` summing up '' all the probabilities of failure in each substep , if . since and is chosen to be greater than , it suffices to let yield : with thus , the sufficient conditions stated in lemma [ lemma : suffcond ] are fulfilled with probability at least , hence theorem [ thm : main ] follows now directly from lemma [ lemma : suffcond ] .since we do not assume are of the same size , notation will be an issue during the discussion .we introduce a few notations in order to make the derivations easier .recall is actually a linear mapping from to this linear operator can be easily written into matrix form : define ] .}= \begin{cases } { \boldsymbol{e}}_{ij } + { \boldsymbol{e}}_{ji } & i\neq j \\ { \boldsymbol{i}}_n + { \boldsymbol{e}}_{ii } & i = j \end{cases}\ ] ] where is an matrix with the -th entry equal to 1 and the others being the expectation of and }- { \boldsymbol{q}}{\boldsymbol{q}}^ * = \|{\boldsymbol{q}}\|^2 { \boldsymbol{i}}_n + { \boldsymbol{q}}{\boldsymbol{q}}^ * + \bar{{\boldsymbol{q}}}\bar{{\boldsymbol{q}}}^ * - { \boldsymbol{q}}{\boldsymbol{q}}^ * = \|{\boldsymbol{q}}\|^2 { \boldsymbol{i}}_n + \bar{{\boldsymbol{q}}}\bar{{\boldsymbol{q}}}^*\ ] ] where is the complex conjugate of .suppose that is a `` low - frequency '' fourier matrix , i.e. , where and with .assume there exists a such that with .we choose with such that , and they are mutually disjoint .let be the matrix by choosing its rows from those of with indices in .then we can rewrite as and it actually equals therefore where is the first columns of a dft matrix with . there holds where is the -th column of a. ahmed , a. cosse , and l. demanet . a convex approach to blind deconvolution with diverse inputs . in _ computational advances in multi - sensor adaptive processing ( camsap ) , 2015ieee 6th international workshop on _ ,pages 58 .ieee , 2015 .b. friedlander and a. j. 
weiss .self - calibration for high - resolution array processing . in s.haykin , editor , _ advances in spectrum analysis and array processing , vol .ii _ , chapter 10 , pages 349413 .prentice - hall , 1991 .j. liu , j. xin , y. qi , f .-zheng , et al . a time domain algorithm for blind separation of convolutive sound mixtures and constrained minimization of cross correlations ., 7(1):109128 , 2009 .r. vershynin .introduction to the non - asymptotic analysis of random matrices . in y. c. eldar and g. kutyniok , editors , _ compressed sensing : theory and applications _ , chapter 5 .cambridge university press , 2012 .
suppose that we have sensors and each one intends to send a function ( e.g. a signal or an image ) to a receiver common to all sensors . during transmission , each gets convolved with a function . the receiver records the function , given by the sum of all these convolved signals . when and under which conditions is it possible to recover the individual signals and the blurring functions from just one received signal ? this challenging problem , which intertwines blind deconvolution with blind demixing , appears in a variety of applications , such as audio processing , image processing , neuroscience , spectroscopy , and astronomy . it is also expected to play a central role in connection with the future internet - of - things . we will prove that under reasonable and practical assumptions , it is possible to solve this otherwise highly ill - posed problem and recover the transmitted functions and the impulse responses in a robust , reliable , and efficient manner from just one single received function by solving a semidefinite program . we derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in presence of noise . our theory is actually a bit pessimistic , since numerical experiments demonstrate that , quite remarkably , recovery is still possible if the number of measurements is close to the number of degrees of freedom . * keywords * blind deconvolution , demixing , semidefinite programming , nuclear norm minimization , channel estimation , low - rank matrix .
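to make the measurement model just summarized concrete, the following numpy sketch sets up the forward model only: a single received record formed as the sum of several circular convolutions plus noise. the sizes, the gaussian test signals and the noise level are placeholders, and the semidefinite / nuclear-norm recovery step analysed in the paper is not reproduced here.

```python
import numpy as np

# Forward model only: one received record y as the sum of r circular convolutions
# f_i * g_i plus noise.  Sizes, Gaussian test signals and the noise level are
# placeholders; the lifted nuclear-norm / semidefinite recovery of the pairs
# (f_i, g_i) is not shown.
rng = np.random.default_rng(1)
L, r = 256, 3
f = [rng.standard_normal(L) for _ in range(r)]   # unknown impulse responses
g = [rng.standard_normal(L) for _ in range(r)]   # unknown source signals
y = sum(np.real(np.fft.ifft(np.fft.fft(fi) * np.fft.fft(gi))) for fi, gi in zip(f, g))
y = y + 0.01 * rng.standard_normal(L)            # additive measurement noise
# In the Fourier domain every entry of y is a sum of bilinear forms in (f_i, g_i),
# i.e. a linear measurement of the rank-one lifted matrices, which is what makes a
# convex (nuclear-norm) relaxation applicable.
print(y.shape, float(np.linalg.norm(y)))
```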
the overall security infrastructure of the eu datagrid ( edg) project has been described elsewhere . in brief , it consists of certificate authorities ( ca ) granting x.509 cryptographic certificates to users and hosts , and providing authentication ; and virtual organisations ( vo ) which authorize users to use resources allocated to the vo .authorization information is currently published as vo membership lists using ldap , but voms , a system based on attribute certificates has been developed and described .this paper describes ways in which this authorization information is used to control access to local resources , such as unix accounts , disk filesystems and the virtual filesystems exported by file and web servers .the remote job execution middleware used by edg is based on the globus gatekeeper , and this uses a static mapping from grid identities , based on x.509 certificates , to local unix user accounts .a text file lists grid identities and corresponding local accounts and when a job is received for execution , it is forked as a process owned by the local unix user account .whilst adaquate for small virtual organisations , this procedure is insufficient for vos with hundreds of users at tens of sites . to address the maintainence of the list of acceptable grid identities, edg has developed software to allow vos to publish membership lists and for sites to construct the mapping text file .however , this still leaves the creation and management of the local unix accounts themselves . to address this for edg, we have developed a system of dynamically allocated pool accounts , which are created by the site administrator and then allocated to users as new job or file server requests are received . a directory of lock filesis maintained to retain a one - to - one mapping between grid identities and allocated user accounts .this also ensures that if two requests overlap , they are assigned to the same unix user account , which allows sharing of files between multipart jobs , and between jobs and fileserver requests .since unix operating systems implement filesystem permissions in terms of unix accounts , this also provides a rudimentary way of preventing jobs from different grid users from interfering with each other .accounts may be returned to the pool of unused accounts once all jobs running as that account have terminated and after leaving a grace period for file retrieval . since the account allocation book keeping is maintained by lock files , this is straightforward to implement as a unix shell script which is run periodically and which can be tailored to individual site requirements .to describe fine - grained access control of files and other file - like resources , edg has developed gacl , a format for access control lists , written in xml and in terms of grid identities or virtual organisation membership .each gacl access control list is divided into one or more entries , each of which has a set of permissions which are granted if that entry s credential requirements are met .permissions are to read ( to read files ) , list ( to obtain directory listings ) , write ( to create or write to files , to create directories , or to delete files or directories ) and admin ( to modify access control lists . 
)an entry may have one or more credentials which must be present , including x.509 certificate identities , vo groups or voms attribute certificates .two generic credentials , authuser ( any user with a valid certificate ) and anyuser ( any user irrespective of credentials ) , allow access to be granted to users with no affiliation to the site . an api and library are provided for manipulating gacl lists , and this is the foundation of the filesystem and fileserver access control described in the remainder of this paper , and of the eu datagrid storage element described elsewhere .we have paid particular attention to applying the gacl access control to standard local filesystem operations , using the slashgrid framework described here .most applications use a filesystem interface to access local files on the same machine .this organises data into files , contained in a hierarchy of folders or directories , each accessible by name . for interactive use ,a graphical file browser is commonly used , displaying files as icons which may be opened and accessed using a mouse .file access within an applications uses an analogous programming interface , which in most programming languages is based on a set of functions to ` open ' , ` read ' , ` write ' etc . the security associated with these operations is traditionally tied to credentials which only have meaning on the machine ( or in some cases the computing site or cluster ) in question .typically , this takes the form of a short username or group name , and a specific file may have one user who has permission to write to that file . for edg testbed sites , these are dynamically allocated pool accounts , but there may be static accounts in other parts of the system , such as user - interface hosts .as we connect machines and sites together with grid technology , these local credentials become increasingly inappropriate for managing authorisation to use resources , as they can not readily be shared across the grid .for example , a user may have the username mcnab at one site , but amcnab at another , and at a third site user mcnab may be a completely different individual .although the pool accounts system described above automates the management of local accounts , the system can not readily be used when creating long lived files , since the username they are owned by is only temporarily associated with a specific grid identity . initially to resolve this shortcoming ,we have produced a file system framework , slashgrid , which allows file and directory authorisation to depend on long - lived grid identities .slashgrid creates a hierarchy of directories under /grid where an application s username , whether static or temporary , is irrelevant to whether it can create , read or modify files : what matters are the grid credentials the application currently holds on behalf of the user , wherever they are on the grid . for interoperability with other products of the eu datagrid and related projects, slashgrid uses the gacl library and access control lists stored in per - directory or per - file control files .slashgrid has also been designed to be readily extensible , by the use of third - party plugins to add additional filesystem types . 
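the per-directory and per-file control files just mentioned hold gacl lists of the kind described at the start of this section. as a toy illustration of how such a list is evaluated, the python fragment below grants the union of the permissions of every entry whose required credentials are all held by the caller; the dictionary encoding and the credential strings are placeholders for the real xml format, and the exact evaluation rules of gacl may differ.

```python
# Toy evaluation of a GACL-style ACL: each entry lists the credentials that must all
# be present and the permissions it then grants; a request receives the union of the
# grants of every matching entry.  The dictionary encoding and credential strings are
# placeholders for the real XML format (person DNs, VO groups, VOMS attributes,
# auth-user, any-user).
ACL = [
    {"require": {"person:/C=UK/O=eScience/CN=A N Other"},
     "grant": {"read", "list", "write", "admin"}},
    {"require": {"group:/datagrid/wp6"}, "grant": {"read", "list", "write"}},
    {"require": {"auth-user"}, "grant": {"read", "list"}},
]

def permissions(credentials):
    granted = set()
    for entry in ACL:
        if entry["require"] <= credentials:   # every required credential is held
            granted |= entry["grant"]
    return granted

print(permissions({"auth-user", "group:/datagrid/wp6"}))   # read, list, write
```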
in particular , we have implemented an http / https filesystem , in which the contents of remote websites can be accessed by applications as if they were local files , and in the case of https , may prove the user s identity to remote servers to obtain access to restricted files .this has the potential to allow existing applications to operate on the grid , indifferent to the true location of the files they manipulate , with remote grid file access provided as a service by the operating system layer .web browsers represent the most common , familiar and most widely installed application used to access remote resources on the current internet .however , most websites are built using http technology , which can only implement cumbersome authentication and authorisation mechanisms .typically , this involves the user choosing a short memorable password for each site to which they need to identify themselves .consequently , the user may find themselves having to enter multiple usernames and passwords as they pass between websites run by their employer , their bank , online merchants etc . as well asthe inconvenience involved , this is also vulnerable to `` brute force '' attacks by third parties due to the short length of the passwords . since the mid-1990 s, most web browsers have also supported the https protocol , which uses x.509 digital certificates and has been widely used to provide authentication of websites to users .this allows a user to send credit card details to a merchant s website , for instance , with some confidence that the site is not being impersonated by a malicious third party .although the corresponding user authentication to websites has been supported since the adoption of https , it has been far less used , due to the administrative overhead and cost of verifying users identity before giving them a meaningful x.509 user certificate . however , with the large - scale deployment of x.509 certificates to members edg and other grid projects across the world , this is changing , and it is now practical to base a high energy physics collaboration s website on https rather than http technology , without requiring users to install any special software. 
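on the client side, presenting the user's x.509 credential over https is straightforward; the snippet below is a minimal example using the python requests library, with the url and all file paths being placeholders rather than a real gridsite deployment.

```python
import requests

# Client side of certificate-authenticated HTTPS: the user certificate and key are
# presented so the server can authorise the request against its ACLs.  The URL and
# all file paths are placeholders.
resp = requests.get(
    "https://gridsite.example.org/protected/report.html",
    cert=("/home/user/.globus/usercert.pem", "/home/user/.globus/userkey.pem"),
    verify="/etc/grid-security/certificates/ca-bundle.pem",
)
print(resp.status_code)
```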
the gridpp project , which represents the uk involvement in edg , has chosen to implement its collaboration website in this way , and to produce a general website management tool , gridsite , which is flexible enough for other projects to use for their own sites .since gridsite is able to uniquely and securely identify users by their x.509 certificate , they can be granted rights to edit and upload webpages , images and binary files .this is enforced using the gacl access control lists described above .access control can be specified in terms of individuals or virtual organisation groups , with membership managed by the group s administrators through the same web interface .this has allowed gridpp to devolve maintenance of the website down to the level of those directly involved in each area of work .since the administration of group authorisation is also devolved , the administrative overhead normally carried by the website manager is greatly reduced .since gridsite permits several users to maintain a set of documents , this has also made collaboration between gridpp members at different institutions considerably easier ; and tools are provided to retain old versions and record document histories to automate the book - keeping of who has changed a document and at what date .initially , gridsite has been implemented as a self - contained executable run from the web server to handle each http or https request .it has now been divided into standalone executables to handle interactive management of the site and groups by administrators , and a loadable module which is dynamically linked directly into the apache webserver used . by incorporating gridsite and gacl technology directly in the webserver , all technologies support by the webserver , including static file serving , anddynamic content provided by cgi scipts , php , asp or jsp server - parsed pages , for example , can be subject to grid - based access control .this flexibility allows a gridsite server to simultaneously operate as an efficient file server , as a web host with dynamic content and as a grid host with grid services in java and other languages operating in their favoured environments .future developments will include porting of the slashgrid filesystems framework and the gridsite server management system to platforms other than linux .support for additional authorization credentials , such as globus cas , will be added to the gacl library , along with support for access control languages recommended by the ggf authorization working group .
the eu datagrid has deployed a grid testbed at approximately 20 sites across europe , with several hundred registered users . this paper describes authorisation systems produced by gridpp and currently used on the eu datagrid testbed , including local unix pool accounts and fine - grained access control with access control lists and grid - aware filesystems , fileservers and web development environments .
recently , robots are used in order to complete tasks instead of humans in the place where a human can not enter .some of them are full autonomous , but even in the case of an autonomous robot , teleoperation is often necessary when the robot is applied to real missions , for example surveillance , search and rescue . generally , in the human - robot interface for the teleoperation , images from a camera mounted on the robot are displayed . however ,if the camera is mounted on the front of the robot and its axis is fixed to the forward direction , it is difficult for an operator to recognize the width of the robot from the image and understand the relation between the robot and the environment around the robot .therefore , the operator has heavy workloads and a beginner can not remote control the robot well .in addition , in the teleoperation interface , when the communication is unstable , bad - quality images may be displayed and the images may have the delays , because the information volume of an image is large .when a robot runs on rough terrain , the images from the mounted camera on it are not stable because of robot attitude changes .it is very hard for the operator to watch these nonsteady vibrately images for remote control. therefore we should develop an interface to reduce the operator s workloads .to solve these drawbacks , some interfaces for teleoperation of a robot are proposed .yanco et al .analyze the real teleoperation tasks and find situation awareness is important to reduce workload of an operator . to improve situation awareness in the teleoperation , yanco et al .recommends providing the more spatial information to the operator . to display the robot and its surroundings , shiroma et al . attach a long pole on the robot whose top end has a camera .murphy et al .take vision supports from other robots cameras where the operated robot is on the image from a camera mounted on other robots .however these interfaces have disadvantages that the robot s size is increased or the mission often became slow because the operator should control two or more robots .nguyen et al . and saitoh et al . proposed the interface in which a virtual 3d environment including a cg model of the robot is displayed . andnielsen et al . proposed the interface in which a cg model of the robot and an image from the camera mounted on the robot are displayed on the environment map .however , these methods need 3d modeling or map building of the environment and it often takes much time to generate them . therefore , these systems are difficult to implement to robots in the disaster area where the robot has error of self localization and the communication is unstable . to develop the interface which is robust against bad communication conditions and does not need heavy computational power and high cost sensors , matsuno et al. proposed an original idea of the teleoperation system using past image records ( spir ) , which is effective to an unknown environment .this system uses an image captured at a past time as a background image ( fig .[ i m ] ) and overlays a cg model of the robot on it at the corresponding current position and orientation . 
in this way, the system virtually displays the current robot situation and its surroundings from the past point of view of the camera mounted on the robot ( fig .[ overview ] ) .the system can generate a birds - eye view image which includes the teleoperated robot and its surroundings , and real time vibrations of the image is eliminated by using a fixed background image captured in the past time .the algorithm of spir is following : + + ( step 1 ) get robot s position and orientation + the system gets current robot s position and orientation based on odometry , gps , slam algorithm and so on . in spir, the error of the position is canceled when the background image is switched , because the position of the virtual robot ( cg model ) is depends on relative positions between current robot position and past robot position that the background image was captured .+ + ( step 2 ) save the image and its position + the system stores images from the camera mounted on the robot on the buffer ( temporary memory ) as candidates of the background image . at this time, the system also stores the position of the robot as the camera position where the image is captured .the set of the stored image and position is called `` past image record . ''+ + ( step 3 ) select the background image + the system selects an optimal image as a background . in this study , the system switches the background image if the distance between the current robot position and the past robot position where the background image was taken is larger than a threshold value .+ + ( step 4)generate birds - eye view scene + the system generates a birds - eye view scene by overlaying a cg model of the robot at the corresponding current position and orientation on the background image selected in step(3 ) .+ + by iterating above algorithm , the system displays the birds - eye view image to the operator ( fig .[ overview ] ) .detail descriptions of the implementation of spir are reported in existing papers . in this study , we focus on operability in the case of the narrow communication bandwidth . because spir uses discrete images , the load of transmissionis reduced compared with traditional system which the operator controls the robot by using the images sending in the real time from the robot . however ,if the communication bandwidth is narrow , stored images in the database of the candidate of the background image may be too few .therefore , the system may provide a feeling of strangeness for the operator in the case of the narrow communication band , because the size of the cg model is changed significantly when the background image is switched .the longer the distance from the position where the background image is captured is , the smaller the size of cg model is .if the candidates of the background image are few , the distance may be long . in this study , we proposed zoom function to overcome this problem .moreover , in the case of the narrow communication band , transmission delay will occur . in this case , since the current position data of the robot also have delay , the position data of the cg model on the background image is not correct .therefore it is difficult for the operator to teleoperate the robot , because the operator misses the current position of the robot . in this study , we propose additional interpolation lines that the operator can predict the robot position easily . and the operator can use them as indicators for generating driving control commands of the robot . 
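before moving on, the basic loop of steps (1)-(4) above can be condensed into a short python sketch: past (image, pose) records are buffered, the background is switched once the robot has moved further than a threshold from the viewpoint of the current background, and the cg model is rendered at its pose relative to that viewpoint. the threshold value, the choice of the newest record as the next background and the rendering call are simplifications, not the actual implementation.

```python
import numpy as np

# Condensed SPIR update loop.  A pose is (x, y, heading); the switch threshold,
# taking the newest record as the next background, and the rendering call are all
# simplifications of the real system.
SWITCH_DISTANCE = 2.0   # [m], illustrative threshold

def render_robot_model(background_image, relative_pose):
    # placeholder for the OpenGL overlay of the robot CG model on the background image
    return background_image

class Spir:
    def __init__(self):
        self.records = []        # step 2: stored (image, pose) candidates
        self.background = None   # currently selected (image, pose)

    def update(self, image, pose):
        self.records.append((image, pose))                       # steps 1-2
        if self.background is None or np.hypot(
                pose[0] - self.background[1][0],
                pose[1] - self.background[1][1]) > SWITCH_DISTANCE:
            self.background = self.records[-1]                   # step 3: switch background
        bg_image, bg_pose = self.background
        relative = (pose[0] - bg_pose[0],                        # step 4: robot pose relative
                    pose[1] - bg_pose[1],                        # to the past viewpoint
                    pose[2] - bg_pose[2])
        return render_robot_model(bg_image, relative)
```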
in the section 2 , we proposed zoom function and additional interpolation lines in spir for improving the robustness against the communication delay . in the section 3 ,we show evaluation experiments in the outdoor environments .the section 4 is conclusion .when the background image is changed , the relative relation between the current position of the robot and the past viewpoint where the selected background image was captured is discretely changed . if the distance of the past viewpoints before and after the background image is updated is small , the system makes a frequent small change of the size of the cg model of the robot on the monitor when the background image is switched , as shown in the upper images of fig .[ fig : zoom ] . in the case of the narrow communication band , because many images can not be sent , the relative distance of two positions may be large. the big change of the size of the cg model of the robot in the background image as shown in the middle images of fig .[ fig : zoom ] is a source of the sense of incongruity . to overcome this problem , a zoom function is installed in spir in order to keep the size of cg model on the background image for every sampling time , as shown in the bottom images of fig .[ fig : zoom ] .we define a vertical angle of the field of view ( fov ) as a parameter of a zooming ratio .we explain about the calculation method of the vertical angle to keep the ratio of cg models on the displayed background images . as shown in fig .[ fig : calcangle ] , is the distance between the viewpoint of the background image and the current position of the robot , is the vertical angle of fov of the background image , is the constant height of the robot , and is the vertical length of fov positioned away from the viewpoint . the system should keep to be a constant value for every sampling , because if this ratio is kept , the size of the cg model on the background image is also not changed regardless of the motion of the robot . from the geometric relationship , we obtain in the case that the background image does not change ( in fig . [fig : zoom ] ) , the distance changes continuously , then changes continuously based on eq.([thetazoom])to keep . in the case that the background image is updated ( in fig .[ fig : zoom ] ) , the distance changes discretely , then changes discretely to keep . by changing the angle of the field of view to keep constant according to the eq .( [ thetazoom ] ) , even if the background image is switched , the size of the cg model on the background image is not changed . in proposed system, we used the opengl and opencl functions for image mapping and image extraction . if the transmission delay occurs , the current position data of the robot also has delay and the position of the cg model on the background image in spir is not correct .therefore it is difficult for the operator to understand the current position of the robot and control the robot . to overcome this problem, we add lines on the displayed image generated by spir . by displaying additional lines, the operator can easily predict the coming position of the robot and control it . in this research , we introduce two types of additional lines ; \(a ) extended line of front wheel axis ( solid lines ( a ) in fig . [fig : additional_line ] ) + this line is an extension of the front wheel axis of the robot .this line is overlaid on the background image according to the steering angle of the robot . 
because the operator can easily understand the center of the rotation of the robot , the robot can smoothly turn by fixing this line on the center of the corner .\(b ) predictive trajectory ( dotted lines ( b ) in fig . [fig : additional_line ] ) + this line shows the predictive trajectory of the wheel of the robot . by using the predictive trajectories, an operator can easily estimate the motion of the robot . by collimating this line with the edge of the course or the center line of the road, the robot can run without swerving from the road .we valid the effectiveness of the zoom function and the additional lines in the case of narrow communication band as explained in the section 3 .the number of subjects is 8 , and we compared three methods : ( 1)normal front camera whose angle of fov is 60 degrees(front camera ) , ( 2)existing spir without use zoom function and not add lines ( existing spir ) and ( 3 ) extended spir with zoom function and add lines(proposed spir2 ) . the outdoor experiment environmentis shown in fig .[ fig : course ] .this is a training course of a driving school whose size is about 120[m ] 80[m ] and the length of the course is about 250[m ] .one subject remote controls the robot three times for each system and each system is chosen randomly . in order to cancel the influence of the order of trials .the system configuration of this experiment is almost same as the previous experiment in the section 4.1 .we use a ugv ( unmanned ground vehicle ) as a mobile robot developed by yamaha motor co. , ltd . as shown in fig .[ fig : ugv ] . maximum transrational velocity of the ugv is 1.0 [ m / s ] in this experiment . to control the ugv easily, we use a handle and a pedal instead of a joystick in the previous experiment .as we focus on the effectiveness of the zoom function and the additional lines , the experiments has been carried out without moving objects . in the experiment , we set the limitation of the communication band with the assumption of using mobile phone communication .table [ tbl : setting ] shows the parameters of image and data which are sent from the robot to the operator station in each system .these parameters in each system are set as the best values to remote control the robot with the limitation of the bandwidth according to results of preliminary experiments .[ cols="<,<,<",options="header " , ]in this study , we proposed a solution for problem of existing teleoperation system using past image records ( spir ) . to solve the problem of existing spir that is occurred under narrow communication bandwidth ,zoom function and additional lines are installed in spir . by the outdoor evaluation experiment, we can find that the proposed system is useful under narrow communication band . from the experimental results , we find that the proposed spir reduces the operator workloads of teleoperation comparing to existing spir .we would like to extend this system to the multiple - robots system in the future .a part of the results in this research was collaboratively conducted with yamaha motor , co. , ltd .m. baker , r. casey , b. keyes , and h. a. 
yanco : improved interfaces for human - robot interaction in urban search and rescue , proc .2004 ieee international conference on systems , man and cybernetics , pp.2960 - 2965 ( 2004 ) naoji shiroma , and noritaka sato , and yu - huan chiu , and fumitoshi matsuno : `` study on effective camera images for mobile robot teleoperation '' , proceedings of 13th ieee international workshop on robot and human interactive communication , 2004 .laurent a. nguyen , and maria bualat , and laurence j. edwards , and lorenzo flueckiger , and charles neveu , and kurt schwehr , and michael d. wagner , and eric zbinden : `` virtual reality interfaces for visualization and control of remote vehicles '' , autonomous robots , vol .59 - 68 , 2001 .saitoh , kensaku and machida , takashi and kiyokawa , kiyoshi and takemura , haruo : `` a 2d-3d integrated interface for mobile robot control using omnidirectional images and 3d geometric models '' , proceedings of the 5th ieee and acm international symposium on mixed and augmented reality , 2006 .curtis w. nielsen , and michael a.goodrich : `` comparing hte usefulness of video and map information in navigation tasks '' , proceedings of the 1st acm sigchi / sigart conference on human - robot interaction , 2006 .maki sugimoto , and georges kagotani , and hideaki nii , and naoji shiroma , and masahiko inami , and fumitoshi matsuno : `` time follower s vision : a teleoperation interface with past images '' , ieee computer graphics and applications , vol . 25 , no .1 , pp.54 - 63 , 2005 .naoji shiroma , and hirokazu nagai , and maki sugimoto , and masahiko inami and fumitoshi matsuno : `` synthesized scene recollection for robot teleoperation '' , field and service robotics , vol .25 , pp.403 - 414 , 2006 .
teleoperation is necessary when a robot is applied to real missions , for example surveillance , search and rescue . we previously proposed a teleoperation system using past image records ( spir ) . spir virtually generates a birds - eye view image by overlaying a cg model of the robot at its current position on a background image captured by the camera mounted on the robot at a past time . the problem for spir is that the communication bandwidth is often narrow in some teleoperation tasks . in this case , the candidates for the background image of spir are few and the position data of the robot are often delayed . in this study , we propose a zoom function to compensate for the shortage of background image candidates and additional interpolation lines to cope with the delay of the robot position data . to evaluate the proposed system , outdoor experiments are carried out on a training course of a driving school .
multichannel data are often encountered in scientific fields as different as geophysics , remote sensing , astrophysics or biomedical signal processing . in astrophysics for instance , the data are usually made of non - negative spectra measured at different locations .each of these spectra is a mixture of several elementary source spectra which are characteristic of specific physical entities or sources .recovering these sources is essential in order to identify the underlying components .still , both the sources and the way they are mixed up together may be unknown .the aim of non - negative blind source separation ( bss ) or non - negative matrix factorization ( nmf ) is to recover both the spectra and the mixtures .the following notations will be used throughout the article : * is the number of measurements .* is the number of samples of the spectra / sources .* is the number of sources . * and all bold capital letters are matrices . the value of element is called , row is and column is .* is the data matrix in which each row is a measurement . * the unknown source matrix in which each row is a spectrum / source . * the unknown mixing matrix which defines the contribution of each source to the measurements .* is an unknown noise matrix accounting for instrumental noise and/or model imperfections .* {\sum_{i , j}|\bx_{i , j}|^p} ] .+ consequently , gathering everything together provides an analytic solution for the searched proximal operator : +.\ ] ]* algorithm [ alg : synthesis_update ] * below implements the synthesis and non - negative update of : it converges if and ,\text{min}\left(\frac{3}{2},\frac{1 + 2/(l\gamma)}{2}\right)\right[ ] such that , . gradient computation * return * aim is to find such that : with a column vector and a matrix transform on .this proximal operator is well - defined since is a proper convex and lower semi - continuous function .using the fact that : and following the same steps as in [ app : synprox ] up to equation , we obtain the relationship , and that the computation of [ eq : anaprox ] is equivalent to solving the following problem : however , in opposition to the synthesis case , and problem does not have any analytical solution .it can however be computed using the forward - backward algorithm , with gradient and proximal ( projection onto the -ball ) .+ finally , we obtain : problem : can be reformulated under the settings of : hence , it can be written with : notice that , the dual variable has been split into two for the sake of simplicity , but they could be easily gathered into a unique variable .the algorithm then requires the knowledge of the proximal operator of which is straightforwardly given by : and the proximal operators of and which are respectively projections on the -ball ( proximal # [ prox : linf ] ) and on the non - positive constraints ( proximal # [ prox : pos ] ) .* algorithm [ alg : analysis_update ] * below therefore converges to a solution of problem [ eq : s_analysis ] if , where . if is a tight frame with , which is the case in our experiments , . , _ monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria _ , ieee transactions on audio , speech & language processing , 15 ( 2007 ) , pp .
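as a simplified illustration of the building blocks above, the following python sketch performs a forward-backward (projected ista) update of s for a direct-domain variant of the problem, min_s 0.5||x - a s||_f^2 + lambda||s||_1 subject to s >= 0, whose proximal step is the one-sided soft threshold. the problem sizes, lambda and the number of iterations are arbitrary, and the transformed-domain (synthesis / analysis) variants discussed in the text additionally involve the sparsifying operator and, for the analysis prior, an inner splitting scheme.

```python
import numpy as np

# Projected ISTA / forward-backward update for  min_S 0.5*||X - A S||_F^2 + lam*||S||_1
# subject to S >= 0.  The proximal operator of lam*||.||_1 plus the non-negativity
# constraint is the one-sided soft threshold max(. - lam, 0).  Sizes, lam and the
# number of iterations are arbitrary.
def update_S(X, A, lam=0.1, n_iter=200):
    S = np.zeros((A.shape[1], X.shape[1]))
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # inverse Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ S - X)
        S = np.maximum(S - step * grad - step * lam, 0.0)
    return S

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((20, 5)))
S_true = np.maximum(rng.standard_normal((5, 300)), 0.0) * (rng.random((5, 300)) < 0.2)
X = A @ S_true
S_hat = update_S(X, A)
print("relative error:", float(np.linalg.norm(S_hat - S_true) / np.linalg.norm(S_true)))
```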
non - negative blind source separation ( non - negative bss ) , which is also referred to as non - negative matrix factorization ( nmf ) , is a very active field in domains as different as astrophysics , audio processing or biomedical signal processing . in this context , the efficient retrieval of the sources requires the use of signal priors such as sparsity . while nmf has now been well studied with sparse constraints in the direct domain , only very few algorithms can encompass non - negativity together with sparsity in a transformed domain since simultaneously dealing with two priors in two different domains is challenging . in this article , we show how a sparse nmf algorithm coined non - negative generalized morphological component analysis ( ngmca ) can be extended to impose non - negativity in the direct domain along with sparsity in a transformed domain , with both analysis and synthesis formulations . to our knowledge , this work presents the first comparison of analysis and synthesis priors as well as their reweighted versions in the context of blind source separation . comparisons with state - of - the - art nmf algorithms on realistic data show the efficiency as well as the robustness of the proposed algorithms .
pastewka & robbins ( 2014 , pr in the following ) recently suggested a criterion to distinguish when two surfaces will stick together ( i.e. when the area - load curve bends into the tensile quadrant ) , which seems based only on fine scale quantities like rms slopes or curvatures , and argued that it conflicts with the classical criterion obtained by fuller & tabor ( 1975 , ft in the following ) using an asperity model , where instead emphasis is on rms amplitude , both for stickiness and for the value of pull - off .with beautiful atomistics simulations , pr introduce self - affine fractal roughness from a lower wavelength of order nanometers , to an upper wavelength in the micrometer to millimeter range , e.g. , where is atomic spacing .their main initial experiment is described as varying the level of adhesion and adjusting the external load as to keep constant the repulsive contact area .they find that:-\1 ) there is always a linear relation between the external load and the area in intimate repulsive contact , .a result that was shown to be robust in asperity models and was not questioned until people started to be interested in very large bandwidths roughness . defining nayak bandwidth parameter , where are the moments of order n in the random process, is magnification factor , and is hurst exponent , pr systems have for the low fractal dimensions ( ) a nayak of the order of 1600 , which is very large , and at these large bandwidths asperities coalesce and form bigger objects which are difficult to be defined by random process theory ( greenwood , 2007 ) .this leads in asperity models to an area - slope which is linear only asymptotically at large separations , and decreasing with otherwise ( carbone & bottiglione , 2008 ) ; but let us not distract the reader with this point which , in the asperity adhesive models , may tend to _ decrease _ stickiness , whereas we shall see that pr criterion introduces a bandwidth dependence which strongly _ increases _ stickiness , and with .\2 ) they find the attractive forces have little effect on the detailed morphology of the repulsive contact area , suggesting the corresponding repulsive force and mean pressure are also nearly unchanged .this suggests they are close to the derjaguin - muller - toporov ( dmt ) limit for which the repulsive pressure is unaffected by adhesive forces and hence the deformation is principally due to the repulsive forces , which in the dmt theory is given by hertz theory .\3 ) they notice that in the `` attractive '' regions , the pressure is simply the theoretical strength of the material , , where is surface energy , and is a range of attraction .this suggests a sort of dugdale - maugis model for adhesion which requires only the knowledge of the size of the region of attractive forces , . is found to be a fixed proportion of the repulsive one , by considering the first order expansion of the separation distance between two contacting bodies under repulsive forces only , which scales as distance , and equating the peak separation to the characteristic distance .notice in particular both , are written as a function of a perimeter , respectively , which is the mean over contiguous segments in horizontal or vertical slices through , for a set of circular objects , we get , instead of , which means that the representative diameter is a little smaller than the real one , . 
] and , where and are the characteristic contact diameter and the additional size of attractive region , respectively , suggesting the contact area is a `` fractal '' , which requires special attention . however , at least in the limit of low bandwidths , the simpler model of circular contact areas of diameter , and circular annuli around the repulsive contact areas , should be sufficient .an asperity model would also show this if it predicts the repulsive and adhesive loads to be proportional each to the number of asperities in contact , .this will be shown to be indeed the case .in other words , the perimeter can be given by and the entire set of results continues to hold for the asperity model too .for the circular area case , in particular , the pr calculation leads to a circular attractive annulus of size an attractive load per asperity where is the compression of the asperity , suggesting this model does nt lead exactly to the dmt model for a sphere ( see maugis , 2000 ) as usually it is reported that for dmt the adhesive load on the asperity is independent on its compression and is equal to the pull off load .however , this point does nt change the main results of this discussion , and we shall take the pr model for the calculation of the asperity theory , rather than the original dmt .4 ) a condition for stickiness is found in their eqt.10 ^{2/3}<1 \label{pr_condition}\ ] ] in loose terms , pr criterion says nothing new : that for macroscopic bulk solids , adhesion at the macroscale is observed only in the case of very soft bodies of very smooth and clean surfaces , so that the length scale is sufficiently large compared to , where is plane strain elastic modulus of the material pairs , and is atomic spacing .more precisely , there is a limit in vacuum for perfectly clear surfaces of crystalline solids , for a lennard - jones potential whose interaction distance .however , it is the detail that matters . using well esthablished results , but grouping the variables using the nayak bandwidth parameter , we can restate ( [ pr_condition ] ) as ^{2/3}<1 \label{pr_parameter}\ ] ] and therefore really the condition is on rms amplitude also for pr . 
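for orientation only, the following numpy fragment computes the profile spectral moments and the nayak bandwidth parameter alpha = m0 m4 / m2^2 for a self-affine psd truncated between a long-wavelength and a short-wavelength cut-off; the hurst exponent, the magnification values and the prefactor are illustrative, and the normalisation conventions differ between profile and surface definitions, so the numbers should not be read as those of pastewka & robbins or of the restated criterion above.

```python
import numpy as np

# Spectral moments m0, m2, m4 of a self-affine profile PSD C(q) ~ q^(-1-2H) truncated
# between q_L and q_s = zeta*q_L, and the Nayak bandwidth parameter alpha = m0*m4/m2^2.
# H, zeta, q_L and the prefactor are illustrative; profile and surface conventions
# differ, so these numbers are not those of the paper.
def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def profile_moments(H, zeta, q_L=1.0e4, C0=1.0e-20):
    q = np.logspace(np.log10(q_L), np.log10(zeta * q_L), 20_000)
    C = C0 * (q / q_L) ** (-1.0 - 2.0 * H)
    return [trap(q**n * C, q) for n in (0, 2, 4)]

for zeta in (16.0, 160.0, 1600.0):
    m0, m2, m4 = profile_moments(H=0.8, zeta=zeta)
    alpha = m0 * m4 / m2**2
    print(f"zeta = {zeta:6.0f}: h_rms = {np.sqrt(m0):.2e}, "
          f"rms slope = {np.sqrt(m2):.2e}, alpha = {alpha:.0f}")
```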
despite this condition does not correspond immediately to the original ft parameter ( which contains a radius of asperities ) , we shall find that a very close equation is obtained also with very simple asperity models , except that the reduction of is not obtained , which means that asperity models predict a much stronger reduction of stickiness with roughness amplitude .we can restate the basic results of the ft model in a simpler form if we consider some simplified assumptions , without changing the results qualitatively .we consider therefore an exponential distribution ( ) , and use the pr model for the behavior of each of the asperities ( [ n1 ] ) , namely the adhesive load on each asperity of radius is dependent on compression with a power - law .repeating the standard calculation of asperity models ( see johnson , 1985 ) , and the contact area as being purely given by the compressive actions , the number of asperities ( per unit area ) in contact , and the total area are unchanged with respect to the standard hertzian case without adhesion , where is total number of asperities per unit area .the total load per unit area is instead changed as pr suggest a critical importance of geometry of the contact not being `` euclidean '' , but being fractal .they find the contact area as a intricate geometry having a characteristic size which they estimate from purely geometrical considerations which has to be multiplied by a perimeter , where the dependence on the contact load enters .we try to reinterpret this result in the light of asperity model maintaining circular contact areas , and simply stating that the perimeter varies with number of asperities in contact , and is therefore a multiple of itself . dividing ( [ area ] ) by ( [ n ] ), we have an estimate of the mean diameter for the asperity model and hence seems to be dependent on non - local quantities , in contrast with ( [ drep - local ] ) . however , using well known quantities in random process theories ( see carbone & bottiglione , 2008 ) for the product which was in early days considered to be constant , but which instead varies with bandwidth , and for , we get changing bandwidth in the range used by pr ( 16 to 1600 ) , so this evaluation gives radius generally higher than pr finds .exact coincidence occurs only for .this is still a correct order of magnitude result with respect to pr calculation , and indeed pr suggest that their factor 4 is an estimate _ "deviations by up to a factor of 2 from this expression for _ _ are responsible for the spread in the figure 3 " _ , but should we attribute the scatter to a bandwidth dependence as the asperity model predicts ?it is extremely important as this assumption changes quite radically the result on the stickiness parameter .indeed , they also suggest _ " for a given system , changes in _ _ with _ _ are less than 25% over 23 decades in _ _ _ " ._ since for a given system implies a given bandwidth , pr also find indirectly that the most part of the variation is due to bandwidth , and the factor 2 they find seems surprisingly in agreement with our estimate for ( [ dmean ] ) which is indeed a factor 2 larger for large bandwidths . if we were to modify their criterion ( [ pr_parameter ] ) with this increase of _ _ with bandwidth in ( [ dmean ] ) , we would already restrict their result as ^{2/3}<1\ ] ] returning to the load equation ( [ load ] ) , it results from a difference , and hence it becomes zero when the contact becomes `` sticky '' . 
to compare with more advanced random process theory based asperity models ( see e.g. carbone & bottiglione , 2008 ), the term transforms into a slope parameter ( we are confusing of course here to a rms amplitude ) , and therefore there is a _sharp _ distinction between sticky and nonsticky behaviour when which is remarkably close both qualitatively and quantitatively to pr parameter ( [ pr_parameter ] ) at low bandwidths : exact coincidence would be obtained for a special bandwidth parameter , which in our crude estimate is of the order of .pr criterion also suggests that , if we consider a full self - affine spectrum of roughness , since the rms slopes and curvatures are defined only by the fine scale features , if these fine scales satisfy the criterion , it does not matter if we have this fine roughness structure as part of a much wider bandwidth of roughness , or in itself . in other words , if we start of with a fine roughness structure so that ( fig.1b ) then we can enlarge the roughness without limit if as in fig.1a .notice that in a sense , this `` removal '' of large scale roughness was done by ft in a much crude way in the sense that they had a macroscopic form , and microscopic roughness , although it is unclear how many scales of roughness they had in the microscopic scale . in their comparison with experiments, they used the reduction of pull - off with respect to the case of aligned asperities in a case like fig.1b , for their spheres .when they did compute the adhesion parameter , they only considered rms amplitude of the fine scale roughness : however , they used this reduction factor to correct the pull - off value expected for the spheres , which scales with their radius .hence , it would seem that in the more complex problem with multiscale roughness , if pr criterion is correct , we expect that pull - off can not be dependent only on this new adhesion parameter .{cc}{\includegraphics [ height=2.3146 in , width=3.6611 in ] { fig1a-full-roughness.eps } } & ( a)\\{\includegraphics [ height=2.3146 in , width=3.6611 in ] { fig1a-reduced-roughness.eps } } & ( b ) \end{array } $ ] fig.1 an example of the `` stickiness '' equivalence in pr criterion ( a ) a local fine scale roughness , on a larger wavelength structure of which we show only some parts , ( b ) the same roughness but now in itself .pr have also interesting data for pull - off in their `` supplementary information '' , which they find in error with respect to the ft prediction by several orders of magnitude and also qualitatively not in good order .first , we should note an _ error _ in the scale of their fig.s3 .pr were aiming at using the scale used by ft , the ratio of pull - off load to sum of pull - off of aligned total number of asperities , for a single asperity as in jkr theory , instead of dmt value which may be more appropriate , but this is irrelevant.]but they assumed which was correct in the old days for low bandwidths ( it is , which is 0.05 only for ) whereas they bandwidth spans the range .so , if we keep their points as they are , we should have many curves for ft , spanning a band .for the largest bandwidths , the ft curves would be almost 2 orders of magnitude higher .some points may still too `` sticky '' than what ft predicts , and especially `` stickiness '' results for a much wider range than the original ft adhesion parameter , in agreement with the main difference we found in the stickiness parameters .however , the reduction on rms amplitude needed to collapse the data in the x - axis is 
at most a factor 2 , whereas the new pr criterion suggests a much higher reduction , scaling with .pr suggest that their data are the `` lower bound '' of pull - off forces they can find , since these are load - dependent .would other pull - off forces be closer to a ft theory `` corrected '' with the new adhesion parameter ?it is impossible without estimating all individual bandwidths in the data .however , yet another contradiction appears from the data : the caption says or ( closed and open symbols ) , ( blue ) or ( red ) .hence , their own criterion now reads with and the case with low adhesion , suppose with , and and by no means their surfaces are so small in rms amplitude to be of atomic size . indeed , the rms amplitude they have can be estimated in this case to be . even more absurd with , bandwidth , still with same parameters , pr criterion reads . if we take now , these numbers will be multiplied by 10 , which does nt solve the problem : most point in the plot should be non - sticky _ both for their criterion and an asperity based one_. these pull - off values correspond , in the correct scale , to the pull - off of a relevant number of asperities out of the total number , and do not seem to be plausible with the parameters of roughness they have .we have pointed out that the `` parameter - free '' theory of pr may contain several important approximations which affect the stickiness criterion . in particular, their assumption of a constant factor 4 in ( [ drep - local ] ) seems problematic even in fig.3 pr show , and conflicts by the same factor as we have estimated in an asperity model .the fact that asperity models at low bandwidths correctly describe the geometry of the problem is well accepted today ( greenwood , 2007 ) , so the source of conflicts seems to be the dependence on for large bandwidth .another difference with the asperity model is hidden in assuming mean values for both diameter of repulsive contact area and size of annulus of attraction . in prparameter - free theory , is the mean diameter and does seem to take into account of the distribution of contact spot sizes , and so does .in fact , the asperity model does not need to make this approximation .if i estimate the mean size of the annulus of attraction directly from the in ( [ dmean ] ) , as , i get whereas if i estimate from the full integration process which takes into account of the distribution of contact spots sizes ( [ load ] ) as which suggests a 3 times less area of attraction . as varies from 16 to 1600, this factor 3 is not irrelevant .it may well be that these sublte differences in the factor are better captured by the pr model instead of the asperity model , but the result seems quite counterintuitive .we have reexamined the results of pr recent `` parameter - free '' theory .the parameter - free theory in fact does contain some parameters , and in particular , the estimate of the diameter of the repulsive contact areas , which deserves further attention . 
although asperity theories are known to be possibly in error at large bandwidth parameters , many results pr find numerically do not seem in conflict , except of course the criterion for stickiness , which corresponds only in the limit of low bandwidths . the new criterion contains a curious implication , that one can take some fine scale roughness , and build on it increasingly larger wavelengths of roughness without affecting the stickiness . since a finite stickiness implies also a finite pull - off , this seems to be an interesting result , which requires further proof . unfortunately , the data they present for pull - off do not seem consistent , and do not permit conclusive discussion . carbone , g. , & bottiglione , f. ( 2008 ) . asperity contact theories : do they predict linearity between contact area and load ? journal of the mechanics and physics of solids , 56(8 ) , 2555 - 2572 .
pastewka & robbins ( pnas , 111(9 ) , 3298 - 3303 , 2014 ) have recently proposed a criterion to distinguish whether two surfaces will stick together or not , and suggested it shows a large conflict with asperity theories . it is found that their criterion corresponds very closely to the fuller and tabor asperity model one when the bandwidth is small , but otherwise involves an rms amplitude of roughness reduced by a factor . therefore , it implies the stickiness of any rough surface is the same as that of the surface where practically all wavelength components of roughness are removed except the very fine ones , which is perhaps counterintuitive . the results are therefore very interesting , if confirmed . possible sources of approximation are indicated , and a significant error is found in the plotting of the pull - off data which may improve the fit with fuller and tabor . however , they still show finite pull - off values in cases where both their own criterion and an asperity - based one seem to suggest non - stickiness , and the results are in these respects inconclusive . keywords : adhesion , greenwood - williamson s theory , rough surfaces
the problem of determining the causal relationship between various interacting fields or variables is of fundamental importance in many branches of science .knowledge of the causal connection between variables is helpful for the elaboration of a realistic physical model and/or to check its validity .if one can intervene in the system under study and modulate the value of one variable , the observation of the ( delayed ) reaction of other variables to this modulation sometimes allows establishing the causal relationship . however , for various reasons many systems do not permit such intervention , or they are too complex to allow a straightforward interpretation of the observations . other techniques to uncover causal relations are based on finding precursor events , time delays between extreme events ( conditional averaging ) , correlations , etc .one may also attempt to match the system evolution to the predicted evolution from an analytic or numerical model , or to quantify parameters related to system evolution ( growth rates , damping rates , etc . ) .most of these methods , however , do not provide a direct quantification of the causal interaction between variables .even worse , linear analysis techniques ( correlations , conditional averages ) may lead to confusing or even erroneous conclusions regarding causality ( cf . the well - known adage _ ` correlation does not imply causation ' _ ) .in this situation , how must one then determine the causal relation between variables ?causality is notoriously hard to define in general . in the present work, we do not use the term ` causality ' in its philosophical , absolute sense ( if occurs , then will occur ; or : if occurs , then must have occurred ) . rather , we turn to the concept of ` quantifiable causality ' introduced by wiener ( rephrased slightly ) : _ for two simultaneously measured signals and , if we can predict better by using the past information from than without it , then we call causal to ._ this idea led to the formulation of an algorithm for the detection of the causal relation between two measured signals , denoted by _ granger causality _this algorithm , however , is based on a linear prediction of the evolution of a time series involving multivariate minimization , which is inadequate for the analysis of turbulence . although non - linear generalizations are possible and have indeed been elaborated , here we turn to a non - parametric procedure for causality detection originating in the field of information theory : the ` transfer entropy ' . in this work ,we are mainly concerned with the interaction between zonal flows and turbulence .this interaction underlies the spontaneous confinement transitions often observed in fusion plasmas , fundamental for the design of an economically attractive fusion reactor . 
in recent years, more or less detailed models for this interaction have become available .much effort has been invested in demonstrating the relevance of these models for describing the observations , applying advanced analysis tools such as the bicoherence an important aspect of these studies is the elucidation of the causal relation between turbulence , fluctuating zonal flows , and steady state sheared flows .this issue is often implicitly present in the relevant publications , although causality is usually treated with some respect due to the difficulty of addressing it directly ; usually , all that can be said is that a certain sequence of events is observed ( happens before ) , which is a necessary but insufficient condition for the existence of a causal relation. some ` traditional ' methods for elucidating the causal relation between zonal flows and turbulence in fusion plasmas are : ( a ) looking for spatial and temporal correlations ( i.e. , happens before ) between shear , turbulence levels , and transport ; ( b ) comparing growth and damping rates ( vs. ) or energy transfer rates ; ( c ) controlling and the shear externally and observing the effect , and ( d ) looking for ` precursors ' of a confinement transition .these methods often rely on observing variations on a slow time scale by averaging out the fast time scale of turbulent fluctuations .however , the confinement transition relies , precisely , on an interaction between the slow _ and _ fast time scales , and it may well be that this averaging operation eliminates essential information , thus precluding the clarification of causal relationships .one analysis method that avoids this issue ( of averaging out information ) is the calculation of the energy transfer involved in quadric three - wave coupling , based on the bispectrum .this energy transfer reveals the direction of energy flow in fourier space .however , bispectral techniques rely on a number of rather strong assumptions such as weak turbulence with dominant quadratic interactions and a stationary state , and requires rather lengthy time series for reliable results .by contrast , the approach discussed here , based on the transfer entropy , is more generic and less dependent on underlying assumptions .the goal of this work , therefore , is to study whether the transfer entropy technique can provide an answer to the causality questions of the type ` which variable influences which other ' in a highly non - linear situation characterized by various coupled variables or fields in a magnetically confined plasma . for this purpose, we will analyze a few model systems considered relevant and analyze experimental data from the tj - k and tj - ii stellarators .the data are selected for their relevance to the study of the interaction between zonal flows and turbulence .the structure of this paper is as follows . in section [ method ] ,we outline the method and perform some tests on relevant models of systems with non - linearly interacting variables . in section[ results ] , we present some analysis results for data obtained in fusion devices , relevant in the framework of zonal flows and confinement transitions . 
finally , in section [ discussion ] , we discuss the findings and draw some conclusions .consider two processes and yielding discretely sampled time series data and .the data are assumed to correspond to a stationary state ; any slow drifts of measurement signals have been removed by subjecting the time series to a suitable trend removal , if necessary .their mutual information is defined as : where is a ( joint ) probability distribution function ( pdf ) .it quantifies the mutual reduction of uncertainty of one of the variables due to knowledge of the other one , expressed in amount of bits .the mutual information if and only if and are statistically independent , in which case .thus , the mutual information detects common information content between the processes and but does not reveal the direction of information flow ( if any ) . for this purpose , the temporal structure of the data patterns must be taken into consideration .we introduce the multi - indices and , such that the indices are monotonically increasing , i.e. , and similar for .we will use the shorthand notation to indicate a set of data values preceding or coinciding with the time associated with time index , and likewise .a measure of information transfer between the two time series and is given by the _ transfer entropy _ : the sum runs over the arguments of the probability distributions ( or the corresponding bins , cf . next section ) .the reason for using multi - indices ( a minor extension of ) is to allow the possibility of including various time scales of influence on the effect variable .the transfer entropy can be rewritten in the form of a conditional mutual information .it measures the excess amount of bits needed to encode the information of the process at time point with respect to the assumption that this information is independent from .in other words , the transfer entropy is an implementation of wiener s ` quantifiable causality ' . if has no influence on the immediate future evolution of system , one has , so that . can be compared to to uncover a net information flow .using , eq .( [ te ] ) can be rewritten as thus , computing requires estimating four multi - dimensional probability distributions . here , the probability distributions appearing in eqs .( [ te]),([te1 ] ) are calculated using a discrete binning of bins in each coordinate direction .the main joint pdf has dimensions , so there are bins , and this number should be much smaller than the available length of the data arrays , , in order to obtain a statistically significant sampling of the pdf . in plasma physics applications , the available stationary time series are usually rather short ( ) .this inevitably means that , , and should all be small . choosing a small value ( 2 or 3 )is called ` coarse graining ' . for the same reason , we will set in this work .this means that and are scalar indices instead of vector indices , and that their value must be chosen judiciously in order to capture the historic information that has the most significant impact on the future evolution of the signals .this introduces the problem of selecting an appropriate value of .if the signals are oscillatory , the period of oscillation can be determined and and should correspond to a time interval less than about one quarter of the oscillation period .when the signal is not clearly oscillatory or has multiple oscillations , the ( linear ) decorrelation time can be used as a guide for choosing and . 
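To make the estimation procedure concrete, the following is a minimal Python sketch of a histogram-based estimate of eq. ( [ te1 ] ) for the simplest case k = l = 1 , using b equal-width bins ( ` coarse graining ' ) and a single past lag tau ; the function and variable names are illustrative and are not taken from the analysis code used in this work . A helper for the self-mutual information ( the mutual information between a signal and a delayed copy of itself ) , which can be used to guide the choice of tau as discussed next , is included as well .

```python
import numpy as np

def _discretize(sig, b):
    """Map a signal onto integer bin labels 0..b-1 (equal-width bins, 'coarse graining')."""
    edges = np.linspace(sig.min(), sig.max(), b + 1)
    # digitize against the inner edges gives labels in 0..b-1
    return np.digitize(sig, edges[1:-1])

def transfer_entropy(x, y, b=3, tau=1):
    """Estimate T_{Y->X} in bits for k = l = 1 and a single past lag tau,
    using plain bin-counting estimates of the probability distributions."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xf = _discretize(x[tau:], b)      # x_{n+1}: the value to be predicted
    xp = _discretize(x[:-tau], b)     # own past, lag tau
    yp = _discretize(y[:-tau], b)     # past of the other signal, lag tau

    joint = np.zeros((b, b, b))       # p(x_future, x_past, y_past)
    np.add.at(joint, (xf, xp, yp), 1.0)
    joint /= len(xf)

    p_xp_yp = joint.sum(axis=0)       # p(x_past, y_past)
    p_xf_xp = joint.sum(axis=2)       # p(x_future, x_past)
    p_xp = joint.sum(axis=(0, 2))     # p(x_past)

    te = 0.0
    for i in range(b):
        for j in range(b):
            for k in range(b):
                p = joint[i, j, k]
                if p > 0:
                    num = p / p_xp_yp[j, k]          # p(x_f | x_p, y_p)
                    den = p_xf_xp[i, j] / p_xp[j]    # p(x_f | x_p)
                    te += p * np.log2(num / den)
    return te

def self_mutual_information(x, b=3, tau=1):
    """Mutual information (bits) between x(t) and x(t - tau); its first minimum
    is a common guide for choosing the lag tau."""
    xf = _discretize(np.asarray(x, float)[tau:], b)
    xp = _discretize(np.asarray(x, float)[:-tau], b)
    joint = np.zeros((b, b))
    np.add.at(joint, (xf, xp), 1.0)
    joint /= len(xf)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(b):
        for j in range(b):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi
```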
in strongly chaotic , nonlinear , or turbulent systems, it is probably better to use the mutual information to determine this value , as described in ref . . to do so, one replaces in eq .( [ mutualinfo ] ) with a delayed version of and computes for a set of delay times ; in the following , we shall refer to this quantity as the self - mutual information . in any case, the transfer entropy results should not be excessively sensitive to the precise choice of and provided the preceding guidelines are followed , which can be tested by varying their values and observing the outcome .in the remainder of this chapter , we will perform some tests using well - understood though non - trivial models .a system of coupled van der pol oscillators may exhibit chaos without external driving .such a system is described by : - \left ( x_i+\sum_j { \kappa_{ij}x_j}\right ) \end{aligned}\ ] ] the parameter determines the limit cycle of oscillator ( for ) , while specifies the non - linear coupling between oscillator and oscillator .we have run a simulation with , and meaning that there are two oscillators with slightly different limit cycles , while oscillator 2 affects oscillator 1 ( but not vice versa ) . with this choice of parameters ,the system is in a quasi - periodic state .time was integrated from to , and 10000 equally spaced data points were saved for analysis . only the time interval from to used for analysis in order to remove the initial transient phase .a section of data is shown in fig .[ pol_data ] . from a spectral analysis ,the mean oscillation period of the signals was about 6.72 , corresponding to about 67 samples per period .thus , should be chosen less than about 17 ( ) .the latter value ( 17 ) also roughly corresponds to the first minimum of the self - mutual information of .the selected data are analyzed with , .net information flow from signal to signal is computed as , where the indices correspond to the signals , respectively .the following net transfer entropy matrix is obtained ( only the part of the matrix above the diagonal is shown ; the remainder follows from antisymmetry ) : this can be represented graphically by drawing 4 dots representing the 4 signals in a plane , cf .[ pol_flow ] .the four dots are connected by arrows , such that the direction of the arrow indicates the direction of net information flow , while the width of the arrow is proportional to the value .there is a strong flow from to ( corresponding to ) .this non - trivial component corresponds to the fact that is such that oscillator 2 affects oscillator 1 ( but not vice versa ) .another strong flow is from to ( ) , for the same reason .the flow from to ( ) is trivial ( cf .( [ pol ] ) ) .the remaining arrows are smaller , so that it is clear that the information flow is dominantly from oscillator 2 to oscillator 1 .therefore , the analysis technique correctly recovers the direction of coupling among the components of the system .the stability of the analysis method was tested by computing the transfer entropy for a varying length of the data arrays , .[ pol_convergence ] shows ( the transfer entropy from signal 1 , , to signal 3 , ) and ( the transfer entropy from signal 3 , , to signal 1 , ) versus , with the analysis settings as in the previous paragraph .it is seen that converges to a stable value for , a rather modest number .note that in this case , the total number of bins of the main pdf is .the value of the transfer entropy , expressed in bits , can be calibrated against the total bit range , , 
implying that the coupling strength is quite significant . to understand the evolution of the transfer entropy with the analysis parameters , we calculated ( which specifies the net flow from signal 1 , , to signal 3 , ) for a range of values , cf .[ pol_netflow ] .the graph shows that the net information transfer from to is negative for ( as it should , for we know that the information transfer should go from oscillator 2 to oscillator 1 ) . for higher values of , the net flow changes sign .this occurs when crossing the quarter - period value ( 17 ) or minimum self - mutual information value ( 17 ) , and is caused by the fact that this system is quasi - periodic .it seems important , therefore , to keep well below the mentioned reference values . in the context of fusion plasmas , spontaneous confinement transitions are of prime interest . in recent years, models have been developed to describe such transitions , involving the nonlinear interaction between various fields .in this section , we will use the model of ref . to generate signals for analysis using the transfer entropy technique .the model equations are : here , represents the turbulence amplitude , the zonal flow shear , and the sheared flow .we performed a simulation run with and ( cf .fig . 5 of the cited paper ) , generating a set of 10,000 time points ( at sampling rate ) .a short section of the model output is shown in fig .[ pp_signals ] .the mean period of the quasi - periodic oscillations is 40.09 , corresponding to about 40 samples .the first minimum of the self - mutual information occurs at 8 ( for ) , 10 ( for ) or 14 ( for ) samples .thus , and should be chosen well below 8 .the transfer entropy is computed for all 9 possible combinations of signals ( ) .the settings chosen are : .the following transfer entropy matrix is obtained : comparing the values of with , one concludes that the interactions are quite strong .net information flow from signal to signal is computed as .this is again represented graphically by drawing 3 dots representing the 3 signals in a plane .the three dots are connected by arrows , such that the direction of the arrow indicates the direction of net information flow , while the width of the arrow is proportional to the value .see fig .[ pp_flow ] .the resulting diagram makes eminent physical sense : drives ( turbulence drives zonal flow ) ; drives ( zonal flow drives sheared flow ) ; drives ( sheared flow controls turbulence ) .this is precisely the order of interaction that one would expect in this type of model .of course , in this simplified case this sequence can also be obtained by simple inspection of the data ( fig . [ pp_signals ] ) , as the mutual time delays are in clear accord with this result . on the other hand ,the present method is generally applicable and allows quantifying the result .in this section , we will apply the transfer entropy technique to data obtained from various magnetic confinement devices .the data have been selected for their relevance to the study of the interaction between zonal flows and turbulence .zonal flows are large scale electrostatic potential structures that form spontaneously in magnetically confined toroidal fusion plasmas , and have zero toroidal wavenumber , small or zero poloidal wavenumber and finite radial wavenumber .the global nature of these structures makes them hard to identify , as most measurements ( of potential or radial electric field ) are local . 
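As a side note, the net-information-flow matrices shown for the two models above ( figs. [ pol_flow ] and [ pp_flow ] , with the net flow from signal i to signal j computed as the difference of the two directed transfer entropies ) can be assembled as in the following sketch ; it assumes a transfer_entropy(x, y, b, tau) helper such as the one sketched earlier , and the signal list and names are placeholders .

```python
import numpy as np
# assumes transfer_entropy(x, y, b=3, tau=1) as sketched in the previous code fragment

def net_flow_matrix(signals, b=3, tau=1):
    """Antisymmetric matrix of net information flow, dT[i, j] = T_{i->j} - T_{j->i}.
    `signals` is a list of equally long 1-d time series."""
    n = len(signals)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # information flowing from signal i into signal j
                T[i, j] = transfer_entropy(signals[j], signals[i], b=b, tau=tau)
    return T - T.T

def dominant_couplings(dT, names):
    """Print the directed couplings sorted by strength (largest net flow first),
    i.e. the arrows of the interaction diagram."""
    pairs = [(dT[i, j], names[i], names[j])
             for i in range(len(names)) for j in range(len(names)) if dT[i, j] > 0]
    for strength, src, dst in sorted(pairs, reverse=True):
        print(f"{src} -> {dst}: {strength:.3f} bits")
```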
in the following , we will not worry about the precise identification of zonal flows , but rely on earlier published analyses showing that the presented data pertain to zonal flows with high probability . our main goal here is to analyze the interaction between these hypothetical zonal flows ( identified via potential or radial electric field fluctuations ) and turbulence ( identified via the density fluctuation amplitude or radial particle transport flux ) . here , we analyze data from the tj - k stellarator , a torsatron operated at low magnetic field ( mt ) and low plasma beta . the discharge analyzed here corresponds to a helium plasma , heated by microwaves , with a central density of m and an electron temperature of ev and cold ions , as reported in more detail elsewhere . in this experiment , turbulence was dominated by electrostatic drift wave turbulence , and the total particle transport and zonal potential were found to be linked in a predator - prey cycle . among other diagnostics , the device is equipped with a set of 64 langmuir probes , distributed over a poloidal circumference of the device . the probes are configured to measure floating potential and ion saturation current in an alternating fashion , at a sampling rate of 1 mhz . from these signals , we compute the zonal potential as the mean poloidal value of the floating potential , and the global radial particle flux as the poloidal mean of the local radial particle flux , proportional to the fluctuating ion saturation current times the local poloidal electric field , as described in more detail in the cited reference . we quantify the ` global turbulence level ' by computing the root mean square ( rms ) deviation of the 32 poloidally distributed ion saturation current measurements ( ) , thus obtaining a quantifier of the turbulence level with the same time resolution as and . a short section of data is shown in fig . [ tjk_data ] ; clearly , these data are much less regular than the model data shown in the preceding section , making it considerably more difficult to understand the nonlinear relationship between the signals . we apply the analysis described above , setting ( coarse graining ) . fig . [ tjk ] shows the transfer entropy between the two signals and as a function of . the amplitude of the transfer entropy is rather small , namely below , compared to the full bit range , which indicates that the causal link between these variables is not very strong . nevertheless , it is unexpected and interesting to observe that and peak at different values of . peaks at about 20 , while peaks at about 60 . thus , the zonal potential has a rather fast impact on the total particle flux , while the total particle flux acts back on the zonal potential on a much longer time scale . in terms of net information transfer , it flows from to for time scales less than about 40 , and in the opposite direction for longer time scales . the two distinct time scales for mutual interaction would immediately give rise to oscillatory behavior , as indeed observed . this seems consistent with the usual predator - prey models for the interaction between zonal flow and turbulence . in the standard zonal flow model , the zonal flow has an impact on the global turbulence level . fig . [ tjk_cycle ] shows the interaction diagram between all three signals at the two most significant values of . on a short timescale ( 20 ) , the zonal flow affects the transport , and the transport in turn affects the turbulence level ( presumably , by modifying the driving gradients ) .
on a longer time scale ( 60 ), the transport affects the zonal flow , but the interaction with the turbulence level is insignificant .the short time scale result is interesting , as it confirms the analysis of , where it was observed that the zonal flow does not affect the turbulence amplitude strongly , but rather it affects the transport ( as noted in the cited paper , by modifying the phase relation between density and potential fluctuations ) .the modification of the transport then affects the turbulence amplitude .although there is an arrow showing that the zonal potential also affects the turbulence level directly , its strength is much less than the indirect route via the turbulent transport .the long time scale result is presumably simply due to a restoration of ambipolarity : a modification of transport must eventually lead to a modification of potential .+ tj - ii is a heliac type stellarator with 4 field periods .the experiments discussed below have been carried out in pure neutral beam injection ( nbi ) heated plasmas ( line averaged electron density m , central electron temperature ev , ev ) .the input nbi power was about 500 kw .these discharges have been reported elsewhere in more detail . in this section, we will analyze data from the doppler reflectometry diagnostic taken as the plasma experiences spontaneous confinement transitions . in doppler reflectometry , a finite tilt angle is purposely introduced between the incident probing beam and the normal to the reflecting cut - off layer , and the bragg back - scattered signal is measured .the amplitude of the recorded signal , , is a measure of the intensity of the density fluctuations , .furthermore , as the plasma rotates in the reflecting plane ( flux surface ) , the scattered signal experiences a doppler shift .the size of this shift is directly proportional to the rotation velocity of the plasma turbulence perpendicular to the magnetic field lines , , and therefore to the plasma background velocity , provided the latter dominates over the phase velocity of density fluctuations ( cf .the doppler reflectometer signals , sampled at 10 mhz , allow determining and with high temporal resolution .first , we consider discharges in a magnetic configuration with edge rotational transform . in this configuration , a transition from l - mode to an intermediate ( i ) phase is often observed ( intermediate between the l and h modes ) . in the i - phase ,predator - prey oscillations occur , and bicoherence is relatively strong as reported elsewhere .[ 30895_ab ] shows an example of the transfer entropy for data in a 20 ms long time window in the i - phase versus ( with ) .the graph bears similarity to the corresponding graph for tj - k , fig .[ tjk ] , in that there is a clear peak in the transfer entropy curves , while dominates over for small values of .the position of the peak of the transfer entropy appears to be related to the autocorrelation time of the turbulence ( for tj - k , for tj - ii ) .thus , it is not related to the very slow predator prey cycles reported in earlier work , with a period of about a ms . in other words ,the analysis based on the transfer entropy has uncovered a novel interaction .[ series_100_35 ] shows the mean evolution of the transfer entropy for 10 discharges in this magnetic configuration . in these discharges , an l i transition occurred at a certain time , which was defined as ms .the time window ms was subjected to analysis . 
in this time window ,the transfer entropy was computed for successive 2 ms time sections of the signals and , using , .finally , the resulting transfer entropy curves were averaged over the 10 selected discharges .next , we consider discharges in a magnetic configuration with . in this configuration , a relatively rapid transition from l - mode to h - mode is often observed , without intermediate ( i ) phase .the average transfer entropy was computed for a number of discharges using an analogous procedure as described above , however setting at the l h transition time .[ series_101_42 ] shows the average evolution of the transfer entropy for 4 discharges in this magnetic configuration ( around the l h transition ) .the transfer entropy increases sharply by a factor of 2 at the l h transition , indicating the regulation of turbulence ( ) by the zonal flow ( ) .this regulatory phase lasts for about ms , in accord with the duration of enhanced bicoherence reported elsewhere .we draw attention to an interesting difference between the l i and l h transitions . with the l h transition , the transition is followed by a rapid increase in , while remains approximately constant .thus , the zonal flow is simply regulating the turbulence ( suppressing it ) . with the l i transition , the transition also shows a rapid increase of , but this is mirrored ( although at a lower intensity level ) by a similar increase in .this is consistent with the fact that not only does the zonal flow regulate the turbulence , but the turbulence also acts back on the zonal flow , which could be related to the observed ( predator - prey type ) oscillations . also , in the case of the l i transition , the values achieved by the transfer entropy are about 3 times higher than with the l h transition .in both cases , the amplitude of the transfer entropy is modest compared to the bit range , , although an order of magnitude above the tj - k case reported in the preceding section .in discharge 18080 , heated by electron cyclotron resonant heating ( kw ) , a triple langmuir probe was inserted to normalized radius . by raising the electron density ,a spontaneous confinement transition was provoked , and a subsequent back - transition was achieved by bringing the density down again .it should be noted that this transition is not an l h transition , but is related to a change of neoclassical root ( a local sign change of the mean radial electric field , ) . the density evolution is shown in fig . [ 18080 ] ( top ) , showing the double crossing of the critical line averaged density value . 
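The transfer entropy in this section is evaluated in a time-resolved manner , over successive short ( 2 ms ) windows , and the resulting curves are then averaged over discharges after alignment on the transition time . A minimal sketch of such a block-wise computation , again assuming the transfer_entropy helper sketched in section [ method ] and using illustrative parameter values , is given below .

```python
import numpy as np
# assumes transfer_entropy(x, y, b=3, tau=1) as sketched in section [method]

def windowed_transfer_entropy(x, y, fs, window_ms=2.0, b=3, tau=5):
    """Transfer entropy T_{Y->X} evaluated in successive non-overlapping windows.
    fs is the sampling rate in Hz; returns window centre times (s) and TE values."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w = int(round(window_ms * 1e-3 * fs))      # samples per window
    times, te = [], []
    for k in range(len(x) // w):
        sl = slice(k * w, (k + 1) * w)
        times.append((k + 0.5) * w / fs)
        te.append(transfer_entropy(x[sl], y[sl], b=b, tau=tau))
    return np.array(times), np.array(te)

# hypothetical usage: compute the curve for each discharge, shift the time axis so that
# t = 0 coincides with the transition, and average the curves over the discharge set.
```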
the langmuir probe measured floating potentials and ion saturation currents on various pins at a sampling rate of 1 mhz .the probe configuration allowed the computation of the fluctuating radial and poloidal electric fields , and , and the fluctuating radial particle flux .[ 18080 ] ( bottom ) shows the transfer entropy between some of these signals , computed for successive 2 ms time sections using , .interestingly , the transfer entropy is largest for the combination .this is significant , in view of the fact that this corresponds to the impact of a possible zonal flow ( ) on the radial particle flux .this quantity is seen to build up gradually before the transition , and essentially disappear during the enhanced confinement state ( ms ) .the build - up phase presumably corresponds to the gradual development and growth of a zonal flow , which however disappears when the line averaged density is above its critical value , m .the transfer entropy is also large for the combination .this is also significant , as sheared flow is produced by reynolds stress according to standard zonal flow models , which can only be large if and are phase - correlated .traditional analyses have indeed shown that this phase correlation occurs , but the present analysis adds the information that it is the zonal flow ( or poloidal velocity ) that drives the poloidal electric field ( or radial particle velocity ) , and not the other way around . after the back - transition , all quantities return approximately to their pre - transition values .it is noted that very similar results are obtained for a set of 6 similar discharges , showing that these results are robust .a similar analysis was made for two discharges with initial subcritical density in which external biasing was applied between and ms .a biasing probe was inserted about 2 cm into the plasma and biased with respect to a poloidal limiter tangent to the last closed flux surface .the triple langmuir probe was inserted to normalized radius .detailed information about these discharges can be found elsewhere .when applying positive biasing , turbulence was suppressed , leading to an improvement of confinement such that the density rose to values exceeding the critical density for spontaneous transitions ( ) ; however , contrary to the spontaneous confinement transition , here remains positive .[ 16014_16015 ] shows the evolution of and .it is clear that biasing has a strong effect on these quantities . comparing the spontaneous and biasing - induced confinement transitions, one observes that is large for with the spontaneous transition , while it is large for with the induced transition .the explanation for this apparent contradiction is related to the evolution of the mean electric field profile , and will be addressed in section [ discussion ] .the analysis of the causal relation between fluctuating variables is of prime interest when studying complex nonlinear systems , and fundamental to reach a full understanding of such systems and develop realistic models . in this work ,we use the concept of ` causality ' in the restricted sense referred to in section [ introduction ] ( wiener s ` quantifiable causality ' ) . 
the transfer entropy technique allows detecting a causal relation between variables that does not require very lengthy time series ( although stationary state is still a requirement ) and that does not rely on the assumption of weak turbulence .in essence , the analysis is based on the observation of a ( significant ) number of repetitive event sequences occurring in a pair of time series , which however may occur in an irregular manner . as is the case with all methods for causality detection, an important caveat is due .the method only detects the information transfer between measured variables .if the net information flow suggests a causal link between two such variables , this may either be due to a direct cause / effect relation ( in the restricted sense referred to above ) , or due to the presence of a third , undetected variable that affects both ( e.g. , with different delays , thus generating an _ apparent _ causal relation ) .thus , physical insight into the system is always needed to determine whether all relevant variables are being measured and to decide whether the net information flow actually corresponds to a causal link . when computing the transfer entropy for numerical data generated by multivariate nonlinear models , it was found that the direction of interaction between system variables could be recovered .e.g. , in the system of two coupled van der pol oscillators , it was clearly established that oscillator 2 affected oscillator 1 , but not vice versa , in accordance with the design of the system .a similar statement can be made for the simplified predator - prey model .numerical convergence of the analysis was tested and guidelines for an efficient choice of analysis parameters ( , , and ) are provided .we have explored the application of this technique to some data from turbulent fusion plasmas .the selected measurement data are relevant to the understanding of the important confinement transition , in which turbulence spontaneously generates a more ordered plasma state with reduced radial transport .the analysis of the global potential and flux at tj - k revealed the existence of two time scales : 20 and 60 . on the short time scale ,the zonal flow potential was shown to affect transport , which in turn affected turbulence .on the longer time scale , the transport affected the zonal flow potential , which was hypothesized to be due to a restoration of ambipolarity .the short time scale result is in accordance with the previous analysis of , whereas the longer time scale result is novel and reveals the potential significance of the technique to uncover new relationships .doppler reflectometry data from tj - ii taken across l i and l h transitions in nbi heated plasmas showed how the fluctuating perpendicular flow velocity , again associated with zonal flows , affects the turbulence . across the l h transition , was found to increase sharply , while the reverse interaction remained fairly constant . 
however , the l i transition was characterized by an increase in _ both _ these quantities ( being dominant ) .the i - phase is characterized by quasi - periodic predator - prey oscillations , which however occur on a rather slow time scale ( of the order of a ms ) compared to the interactions found here , occurring on a time scale .of course , if the predator - prey oscillations affect the turbulence , as one assumes must be the case , then this effect must be detectable on this fast time scale .this seems to be what the transfer entropy succeeds in doing .note that the interaction between the slow time scale of the predator - prey cycle and the fast time scale of the turbulence autocorrelation time was hinted at already in a previous analysis based on the bicoherence .langmuir probe data from tj - ii taken across a low - density confinement transition in an ecr heated plasma show how the transfer entropy between the fluctuating radial electric field ( associated with zonal flows ) and the particle flux , , gradually grows prior to the transition and essentially disappears once the transition has taken place .by contrast , during bias - induced transitions in ecr heated plasmas , increases sharply while biasing is applied .the observed behavior of the transfer entropy in ecr heated plasmas can perhaps be understood as follows . in the case of the spontaneous transition in ecr heated plasmas, increases gradually as the density is raised towards the critical value , which is interpreted as a gradual growth of the zonal flow amplitude .simultaneously , increases , which is interpreted as the build - up of reynolds stress , expected to produce a ( steady state ) sheared flow .however , as the density is raised slowly , the plasma adjust the profiles in an attempt to maintain the ambipolarity condition . at a certain point ,the electron root solution of the ambipolarity equation disappears . immediately prior to this point ,the flow susceptibility is large ( i.e. , small changes in the ambipolar flux are associated with large changes in ) , which can be interpreted in terms of a low neoclassical viscosity , leading to large amplitude zonal flows , consistent with the observed gradual growth of as the critical density is approached from below . following the transition to the ion root state , flow susceptibility is suddenly strongly reduced ( neoclassical viscosity is high ) , so that zonal flows are strongly damped ,which is consistent with the disappearance of both and as the critical density is crossed . in the case of the biasing discharges , and increase sharply when the biasing is activated . from previous workit is known that the externally applied radial electric field gives rise to long range correlations , which is consistent with the formation of a zonal flow , associated with the observed growth of and .a possible explanation may be that the imposed electric field enhances the ambipolar electric field value ( while remaining in the electron root state ) and correspondingly enhances the flow susceptibility , leading to zonal flow enhancement . comparing the tj - k and tj - ii results, we note that the amplitude of the transfer entropy is about an order of magnitude higher in the latter device. presumably , this corresponds to a larger zonal flow amplitude generated by a stronger drive ( steeper gradients ) . 
from the numerical and experimental examples examined in this work, it is concluded that the transfer entropy constitutes a powerful tool to unravel the causal relationship between nonlinearly interacting fields in complex systems .it is expected that this technique may find applications in many fields of research .research sponsored in part by the ministerio de economa y competitividad of spain under project nr .ene2012 - 30832 .fruitful discussions with j.l .velasco are gratefully acknowledged .10 y. nagashima , k. itoh , s .-itoh , a. fujisawa , k. hoshino , y. takase , m. yagi , a. ejin , k. ida , k. shinohara , k. uchara , y. kusama , and the jft-2 m group .bispectral analysis applied to coherent floating potential fluctuations obtained in the edge plasmas on jft-2 m ., 48:s1 , 2006 .tynan , m. xu , p.h .diamond , j.a .boedo , i. cziegler , n. fedorczak , p. manz , k. miki , s. thakur , l. schmitz , l. zeng , e.j .doyle , g.m .mckee , z. yan , g.s .wan , h.q .wang , h.y .guo , j. dong , k. zhao , j. cheng , w.y .hong , and l.w .turbulent - driven low - frequency sheared flows as the trigger for the h - mode transition ., 53(7):073053 , 2013 .y. xu , s. jachmich , r.r .weynants , m. van schoor , m. vergote , a. krmer - flecken , o. schmitz , b. unterberg , c. hidalgo , and textor team .long - distance correlation and zonal flow structures induced by mean shear flows in the biasing h - mode at textor . , 16:110704 , 2009 .askinazi , m.i .vildjunas , n.a .zhubr , a.d .komarov , v.a .kornev , s.v .krikunov , l.i .krupnik , s.v .lebedev , v.v .rozhdestvensky , m. tendler , a.s .tukachinsky , and s.m .evolution of geodesic acoustic mode in ohmic h - mode in tuman-3 m tokamak ., 38(3):268 , 2012 . m. xu , g. r. tynan , p. h. diamond , p. manz , c. holland , n. fedorczak , s. chakraborty thakur , j. h. yu , k. j. zhao , j. q. dong , j. cheng , w. y. hong , l. w. yan , q. w. yang , x. m. song , y. huang , l. z. cai , w. l. zhong , z. b. shi , x. t. ding , x. r. duan , and y. liu .frequency - resolved nonlinear turbulent energy transfer into zonal flows in strongly heated l - mode plasmas in the hl-2a tokamak ., 108:245001 , 2012 .van milligen , t. estrada , c. hidalgo , t. happel , and e. ascasbar .spatiotemporal and wavenumber resolved bicoherence at the low to high confinement transition in the tj - ii stellarator . , accepted ( arxiv:1211.0420 ) , 2013 .t. estrada , t. happel , l. eliseev , d. lpez - bruna , e. ascasbar , e. blanco , l. cupido , j.m .fontdecaba , c. hidalgo , r. jimnez - gmez , l. krupnik , m. liniers , m.e .manso , k.j .mccarthy , f. medina , a. melnikov , b. van milligen , m.a .ochando , i. pastor , m.a .pedrosa , f. tabars , d. tafalla , and tj - ii team . sheared flows and transition to improved confinement regime in the tj - ii stellarator ., 51:124015 , 2009 .pedrosa , c. silva , c. hidalgo , b.a .carreras , r.o .orozco , d. carralero , and the tj - ii team .evidence of long - distance correlation of fluctuations during edge transitions to improved - confinement regimes in the tj - ii stellarator ., 100:215003 , 2008 .
this work explores the potential of an information - theoretical causality detection method for unraveling the relation between fluctuating variables in complex nonlinear systems . the method is tested on some simple though nonlinear models , and guidelines for the choice of analysis parameters are established . then , measurements from magnetically confined fusion plasmas are analyzed . the selected data bear relevance to the all - important spontaneous confinement transitions often observed in fusion plasmas , fundamental for the design of an economically attractive fusion reactor . it is shown how the present method is capable of clarifying the interaction between fluctuating quantities such as the turbulence amplitude , turbulent flux , and zonal flow amplitude , and uncovers several interactions that were missed by traditional methods .
denial of service ( dos ) attacks are recognized as one of the most challenging threats to internet security .any organization or enterprise that is dependent on the internet can be subject to a dos attack , causing its service to be severely disrupted , if not fail completely .the attacker typically uses a worm to create an `` army '' of zombies , which she orchestrates to flood the victim s site with malicious traffic .this malicious traffic exhausts the victim s resources , thereby seriously affecting the victim s ability to respond to normal traffic .a network layer solution is required because the end - user or end - organization has no way to protect its tail circuit from being congested by an attack , causing the disruption sought by the attacker .for example , if an enterprise has a mbps connection to the internet , an attacker can command its zombies to send traffic far exceeding this mbps rate to this enterprise , completely congesting the downstream link to the enterprise and causing normal traffic to be dropped .network operators use conventional router filtering capabilities to respond to dos attacks .typically , an operator of a site under attack identifies the nature of the packets being used in the attack by some packet collection facility , installs a filter in its firewall / edge router to block these packets and then requests its isp to install comparable filters in its routers to remove this traffic from the tail circuit to the site .each isp can further communicate with its peering isps to block this unwanted traffic as well , if it so desires .currently , this propagation of filters is manual : the operator on each site determines the necessary filters and adds them to each router configuration . in several attacks , the operators of different networkshave been forced to communicate by telephone given that the network connection , and thus email , was inoperable because of the attack .as dos attacks become increasingly sophisticated , manual filter propagation becomes unacceptably slow or even infeasible .for example , an attack can switch from one protocol to another , move between source networks as well as oscillate between on and off far faster than any human can respond .in general , network operators are confronting an `` arms race '' in which any defense , such as manually installed filters , is viewed as a challenge by the community of attacker - types to defeat . exploiting a weakness such as human speeds of filter configuration is an obvious direction for an attacker to pursue .the concept of automatic filter propagation has already been introduced in : a router is configured with a filter to drop ( or rate - limit ) certain traffic ; if it continues to drop a significant amount of this traffic , it requests that the upstream router take over and block the traffic .however , the crucial issues associated with automatic filter propagation are still unaddressed .the real problem is how to efficiently manage the bounded number of filters available to a network operator to provide this filtering support .an attacker can change protocols , source addresses , port numbers , etc . 
requiring a very large number of filters .however , a sophisticated hardware router has a fixed maximum number of wire - speed filters that can block traffic with no degradation in router performance .the maximum is determined by hardware table sizes and is typically limited to several thousand .a software router is typically less constrained by table space , but incurs a processing overhead for each additional filter .this usually limits the practical number of filters to even less than a hardware router .moreover , there is a processing cost at each router for installing each new filter , removing the old filters and sending and receiving filter propagation protocol messages .given the restricted amount of filtering resources available to each router , hop - by - hop filter propagation towards the attacker s site clearly does not scale : internet backbone routers would quickly become the `` filtering bottleneck '' having to satisfy filtering requests coming from all the corners of the internet .fortunately , traceback makes it possible to identify a router close to the attacker and send it a filtering request directly . however , any filter propagation mechanism other than hop - by - hop raises a serious security issue : once a router starts accepting filtering requests from unknown sources , how can it trust that these requests are not forged by malicious nodes seeking to disrupt normal communication between other nodes ? in this paper we propose a new filter propagation protocol called aitf ( active internet traffic filtering ) : the victim sends a filtering request to its network gateway .the victim s gateway temporarily blocks the undesired traffic , while it propagates the request to the attacker s gateway .as we will see , the protocol both motivates and assists the attacker s gateway to block the attack .moreover , a router receiving a filtering request satisfies it only if it determines that the requestor is on the same path with the specified undesired traffic .thus , the filter can not affect any nodes in the internet other than those already operating at the mercy of the requestor .the novel aspect of aitf is that it enables each participating service provider to guarantee to its clients a _ specific , significant amount of protection against dos attacks _ , while it requires only a _ bounded credible amount of resources_. at the same time it is _ secure _ i.e. , it can not be abused by a malicious node to harm ( e.g. block legitimate traffic to ) other nodes . finally , it_ scales with internet size _i.e. , it keeps its efficiency in the face of continued internet growth .a _ flow label _ is a set of values that captures the common characteristics of a traffic flow e.g. , `` all packets with ip source address and ip destination address '' . a _ filtering request _ is a request to block a flow of packets all packets matching a specific wildcarded flow label for the next time units .a _ filtering contract _ between networks and specifies : 1 .the filtering request rate at which accepts filtering requests to block certain traffic to .2 . the filtering request rate at which can send filtering requests to get to block certain traffic from coming into .an _ aitf network _ is an autonomous domain which has a filtering contract with each of its end - hosts and each neighbor autonomous domain directly connected to it .aitf node _ is either an end - host or a border router in an aitf network . 
finally , we define the following terms with respect to an undesired flow : the _ attack path _ is the set of aitf nodes the undesired flow goes through .the _ attacker _is the origin of the undesired flow .the _ victim _ is the target of the undesired flow .attacker s gateway _ is the aitf node closest to the attacker along the attack path .similarly , the _ victim s gateway _ is the aitf node closest to the victim along the attack path .the aitf protocol enables a service provider to protect a client against undesired flows , by using only filters and a dram cache of size .the motivation is that each router can afford gigabytes of dram but only a limited number of filters . in an aitf world , each autonomous domain ( ad ) is an aitf network i.e. , it has filtering contracts with all its end - hosts and peering ads .these contracts limit the rates by which the ad can send / receive filtering requests to / from its end - hosts and peering ads .the limited rates allow the receiving router to police the requests to the specified rates and indiscriminately drop requests when the rate is in excess of the agreed rate .thus , the router can limit the cpu cycles used to process filtering requests as well as the number of filters it requires .an aitf filtering request is initially sent from the victim to the victim s gateway ; the victim s gateway propagates it to the attacker s gateway ; finally , the attacker s gateway propagates it to the attacker .both the victim s gateway and the attacker s gateway install filters to block the undesired flow .the victim s gateway installs a filter only temporarily , to immediately protect the victim , while it waits for the attacker s gateway to take responsibility .the attacker s gateway is expected to install a filter and block the undesired flow for time units .if the undesired flow stops within some grace period , the victim s gateway interprets this as a hint that the attacker s gateway has taken over and removes its temporary filter .this leaves the door open to `` on - off '' undesired flows . in order to detect and block such`` on - off '' flows , the victim s gateway needs to remember each filtering request for at least time units .thus , the victim s gateway , installs a filter for time units , but keeps a `` shadow '' of the filter in dram for time units .time units is very expensive doing so would defeat the whole purpose of pushing filtering to the attacker s gateway . ]the attacker s gateway expects the attacker to stop the undesired flow within a grace period .otherwise , it holds the right to disconnect from her .this fact encourages the attacker to stop the undesired flow .similarly , the victim s gateway expects the attacker s gateway to block the undesired flow within a grace period .otherwise , the mechanism _ escalates _ : the victim s gateway now plays the role of the victim ( i.e. , it sends a filtering request to its own gateway ) and the attacker s gateway plays the role of the attacker ( i.e. , it is asked to stop the undesired flow or risk disconnection ) .the escalation process should become clear with the example in [ example ] .thus , the mechanism proceeds in _rounds_. at each round , only four nodes are involved . in the first round , the mechanism tries to push filtering of undesired traffic back to the aitf node closest to the attacker . if that fails , it tries the second closest aitf node to the attacker and so on .the aitf protocol involves only one type of message : a _ filtering request_. 
a filtering request contains a _ flow label _ and a _ type _ field .the latter specifies whether this request is addressed to the victim s gateway , the attacker s gateway or the attacker .the only nodes in an aitf network that speak the aitf protocol are end - hosts and border routers .internal routers do not participate .aitf node sends a filtering request to aitf node , when wants a certain traffic flow coming through to be blocked for time units .when aitf node receives a filtering request , it checks which end - host or peering network the request is received from / through .if that end - host or peering network has exceeded its allowed rate , the request is dropped . if not, looks at the specified undesired flow label and takes certain actions : * if is the victim s gateway : 1 .it installs a _ temporary _ filter to block the undesired flow for time units . 2 .it logs the filtering request in dram for time units .it propagates the filtering request to the attacker s gateway .if the attacker s gateway does not block the flow within time units , propagates the filtering request to its own gateway . *if is the attacker s gateway : 1 .it installs a filter to block the undesired flow for time units .it propagates the filtering request to the attacker .if the attacker does not stop the flow within a grace period , disconnects from her . *if itself is the attacker , it stops the flow ( to avoid disconnection ) .we should note that the behavior described above is that of a non - compromised , non - malicious node .neither the attacker not even the attacker s gateway are expected to always conform to this behavior .aitf operation does not rely on their cooperation . in figure [ filterfig ] which stands for `` good host '' is an end - host residing in enterprise network , which is connected to local isp through router . runs a regional network that connects through its backbone router to a wide - area isp .similarly , which stands for `` bad host '' is an end - host residing in enterprise network etc . starts sending an undesired flow to . sends a filtering request to against . upon reception of s request, temporarily blocks the undesired flow but also propagates the request to . on the other side , upon reception of s request, immediately blocks the undesired flow , but also propagates the filtering request to . either stops the undesired flow or risks being disconnected .thus , if cooperates , by the end of the first round , filtering of the undesired flow has been successfully pushed to the aitf node closest to the attacker ( ) .of course , may decide not to cooperate and ignore the filtering request .then , the mechanism escalates : propagates the filtering request to . temporarily blocks all undesired traffic , but also propagates the filtering request to and so on .thus , if cooperates , by the end of the second round , filtering of the undesired flow has been successfully pushed to the second closest to the attacker aitf node ( ) . in the worst - case scenario, even refuses to cooperate . as a result, disconnects from . in a network architecture where source address spoofing is allowed ,compromised node can maliciously request the blocking of traffic from to thereby disrupting their communication . to avoid this, we add a simple extension to the basic protocol .the extension introduces two more messages : a _ verification query _ anda _ verification reply_. both types include a _ flow label _ and a _ nonce _( i.e. , a random number ) . 
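For concreteness , the three message types introduced above could be represented as in the following sketch ; the field names and the particular wildcarded flow-label attributes are illustrative assumptions , not a wire format defined by aitf .

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RequestType(Enum):
    # the 'type' field: who the filtering request is addressed to
    TO_VICTIMS_GATEWAY = 1
    TO_ATTACKERS_GATEWAY = 2
    TO_ATTACKER = 3

@dataclass(frozen=True)
class FlowLabel:
    """Wildcarded description of the undesired flow; None means 'any value'."""
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    protocol: Optional[int] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None

@dataclass
class FilteringRequest:
    flow: FlowLabel
    req_type: RequestType

@dataclass
class VerificationQuery:
    flow: FlowLabel
    nonce: int          # random number, echoed back in the reply

@dataclass
class VerificationReply:
    flow: FlowLabel
    nonce: int          # must match the nonce of the corresponding query
```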
when router receives a filtering request , which asks for the blocking of a traffic flow from attacker to victim , verifies that the request is real before taking any action to satisfy it .if is the victim s gateway , this verification is trivial with appropriate ingress filtering .if is the attacker s gateway , verification is accomplished through the following `` 3-way handshake '' : 1 .router receives a filtering request , asking for the blocking of a traffic flow from attacker to victim .2 . sends a verification query to , asking `` do you really not want this traffic flow ? '' responds to with a verification reply .the reply must include the same flow label and nonce included in the query .if the nonce on s reply is the same with the nonce on s query , accepts the request as real and proceeds to satisfy it .the `` 3-way handshake '' is further discussed in [ secure ] .aitf operation assumes that the victim s gateway can determine * who is the attacker s gateway ( in order to propagate the request ) . * who is the next aitf node on the attack path ( in order to escalate , if necessary ) .these assumptions are met , if an efficient traceback technique as those described in is available .also aitf assumes that off - path traffic monitoring is not possible i.e. , if node is not located on the path from node to node , then can not monitor traffic sent on that path ( this assumption is necessary for the `` 3-way handshake '' ) .the basic idea of aitf is to push filtering of undesired traffic to the network closest to the attacker .that is , hold a service provider responsible for providing connectivity to a misbehaving client and have it do the dirty job .the question is , why would the attacker s service provider accept ( or at least be encouraged ) to do that ?if the attacker s service provider does not cooperate , it risks being disconnected by its own service provider .this makes sense for both of them : if in figure [ filterfig ] refuses to block its misbehaving client , the filtering burden falls on .thus , it makes sense for to consider a bad client and disconnect from it . on the other hand, this offers an incentive to to cooperate and block the undesired flow . otherwise , it will be disconnected by , which will result in all of its clients being dissatisfied. moreover , aitf offers an economic incentive to providers to protect their network from the inside by employing appropriate ingress filtering .if a provider pro - actively prevents spoofed flows from exiting its network , it lowers the probability of an attack being launched from its own network , thus reducing the number of expected filtering requests it will later have to satisfy to avoid disconnection . 
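The verification step just described can be summarized , for the attacker's-gateway side , in the following sketch ; it reuses the message classes of the previous fragment , message transport and filter installation are abstracted into placeholders , and none of the names are part of the protocol specification .

```python
import secrets
# uses FlowLabel, FilteringRequest, VerificationQuery, VerificationReply from the sketch above

class AttackersGateway:
    """Illustrative sketch of the '3-way handshake' verification only;
    not a complete implementation of an AITF border router."""

    def __init__(self):
        self.pending = {}   # (victim, flow label) -> nonce of the outstanding query

    def on_filtering_request(self, victim, request):
        # before satisfying the request, ask the alleged victim whether it is real
        nonce = secrets.randbits(64)
        self.pending[(victim, request.flow)] = nonce
        self.send(victim, VerificationQuery(flow=request.flow, nonce=nonce))

    def on_verification_reply(self, victim, reply):
        expected = self.pending.get((victim, reply.flow))
        if expected is not None and reply.nonce == expected:
            # only a node on the path of the flow can have seen the nonce,
            # so the request is accepted as real and the filter is installed
            del self.pending[(victim, reply.flow)]
            self.install_filter(reply.flow)

    # placeholders for actions that involve other nodes or the filter hardware
    def send(self, dst, msg): pass
    def install_filter(self, flow): pass
```

If no matching reply arrives , the request is simply ignored , so a forged request from an off-path node does not cause any filter to be installed .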
in short, aitf creates a cost vs quality trade - off for service providers : either they pay the cost to block the undesired flows generated by their few bad clients , or they run the risk of dissatisfying their legitimate clients , which are the vast majority .thus , the quality of a provider s service is now related to its capability to filter its own misbehaving clients .the greatest challenge with automatic filtering mechanisms is that compromised node may maliciously request the blocking of traffic from to , thereby disrupting their communication .aitf prevents this through the `` 3-way handshake '' described in [ handshake ] .the `` 3-way handshake '' does not exactly verify the authenticity of a filtering request .it only enables s gateway to verify that a request to block traffic from to has been sent by a node located on the path from to .a compromised router located on this path can naturally forge and snoop handshake messages to disrupt - communication .however , such a compromised router can disrupt - communication anyway , by simply dropping the corresponding packets . in short ,aitf can not be abused by a compromised node to cause interruption of a legitimate traffic flow , unless that compromised node is responsible for routing the flow , in which case it can interrupt the flow anyway . aitf scales with internet size , because it pushes filtering of undesired traffic to the leaves of the internet , where filtering capacity follows internet growth . in most cases, aitf pushes filtering of undesired traffic to the provider(s ) of the attacker(s ) .thus , the amount of filtering requests a provider is asked to satisfy grows proportionally to the number of the provider s ( misbehaving ) clients .however , intuitively , a provider s filtering capacity also grows proportionally to the number of its clients . in short ,a provider s filtering capacity follows the provider s filtering workload .if the attacker s provider is itself compromised , aitf naturally fails to push filtering to it . instead , filtering is performed by another network , closer to the internet core .if this situation occurred often , then the scalability argument stated above would be false .fortunately , compromised routers are a very small percentage of the internet infrastructure .thus , aitf fails to push filtering to the attacker s provider with a very small probability .in this section we provide simple formulas that describe aitf performance . for lack of space and given that our formulas are very simple and intuitive , we defer any details to .aitf significantly reduces the _ effective bandwidth _ of an undesired flow i.e. , the bandwidth of the undesired flow actually experienced by the victim .specifically , it can be shown that aitf reduces the effective bandwidth of an undesired flow by a factor of where is the number of non - cooperating time units . ]aitf nodes on the attack path , is attack detection time and is the one - way delay from the victim to its gateway . is the timeout associated with all filtering requests i.e. 
each filtering request asks for the blocking of a flow for time units .for example , if the only non - cooperating node on the attack path is the attacker , and if the one - way delay from the victim to its gateway is msec , for min , an aitf node can reduce the effective bandwidth of an undesired flow by a factor .may be significant the first time the undesired flow is detected .here , we ignore that initial overhead .detecting a reappearing undesired flow could be as fast as matching a received packet header to a logged undesired flow label i.e. , insignificant compared to the one - way delay to the victim s gateway . ] here we only demonstrate this result for i.e. , when the only non - cooperating node on the attack path is the attacker : at time the attacker starts the undesired flow ; at time the victim detects it and sends a filtering request ; at time the victim s gateway temporarily blocks the flow and the victim stops receiving it ; the flow is eventually blocked by the attacker s gateway and released after time .thus , if the original bandwidth of the undesired flow is , its effective bandwidth is .when i.e. , when or more aitf routers close to the attacker are non - cooperating , the attacker can play `` on - off '' games : pretend to stop the undesired flow to trick the victim s gateway into removing its filter , then resume the flow etc .the victim s gateway detects and blocks such attackers by using its dram cache .an aitf node is guaranteed protection against a specific number of undesired flows , which depends on its contract with its service provider .specifically , it can be shown that if a client is allowed to send filtering requests per time unit to the provider , then the client is protected against simultaneous undesired flows . for example , for filtering requests per second and min ,the client is protected against simultaneous undesired flows .aitf enables a service provider to protect a client against undesired flows by using only filters .specifically , it can be shown that if a client is allowed to send filtering requests per time unit to the provider , the provider needs filters and a dram cache that can fit filtering requests in order to satisfy all the requests , where is the amount of time that elapses from the moment the victim s gateway installs a temporary filter until it removes it .the purpose of the temporary filter is to block the undesired flow until the attacker s gateway takes over .therefore , should be large enough to allow the traceback from the victim s gateway to the attacker s gateway plus the 3-way handshake .for example , suppose we use an architecture like , where traceback is automatically provided inside each packet .then traceback time is .if the 3-way handshake between the two gateways takes msec , for filtering requests per second and min , the service provider needs filters to protect a client against undesired flows .aitf requires a bounded amount of resources from the attacker s service provider .specifically , if a service provider is allowed to send filtering requests per time unit to a client , then the provider needs filters in order to ensure that the client satisfies all the requests . 
given these resources , the provider can filter simultaneous undesired flows generated by the client .for example , for filtering request per second and min , the provider needs filters for the client .this filtering request rate allows the provider to filter up to simultaneous undesired flows generated by the client .we have defined an attacker as the source of an undesired flow . by this definition ,an attacker is not necessarily a malicious / compromised node ; it is simply a node being asked to stop sending a certain flow .a legitimate aitf node must be provisioned to stop sending undesired flows when requested , in order to avoid disconnection .aitf requires a bounded amount of resources from the attacker as well .specifically , if a service provider is allowed to send filtering requests per time unit to a client , the client needs filters ( as many as the provider ) in order to satisfy all the requests .for example , for filtering request per second and min , the client needs filters .in mahajan _ et al . _ propose mechanisms for detecting and controlling high bandwidth traffic aggregates .one part of their work discusses how a node determines whether it is congested and how it identifies the aggregate(s ) responsible for the congestion .in contrast , we start from the point where the node has identified the undesired flow(s ) . in that sense ,their work and our work are complementary .another part of their work discusses how much to rate - limit an annoying aggregate due to a dos attack or a flash crowd .in contrast , our mechanism focuses on dos attack traffic and attempts to limit it to rate .the part of their work most related to ours proposes a cooperative _ pushback _ mechanism : a congested node attempts to rate - limit an aggregate by dropping a portion of its packets .if the drop rate remains high for several seconds , the node considers that it has failed to rate - limit the aggregate and asks its adjacent upstream routers to do it .if the recipient routers also fail to rate - limit the aggregate , they recursively propagate pushback further upstream .a pushback request is propagated hop by hop by the victim towards the attacker .in contrast , the propagation of an aitf filtering request involves only nodes : the victim , the victim s gateway , the attacker s gateway and the attacker we claim that this allows aitf to scale with internet size .a pushback request does not force the recipient router to rate - limit the problematic aggregate ; it relies on its good will to cooperate .in contrast , aitf forces the attacker to discontinue the undesired flow and the attacker s service provider to filter the attacker or else risk disconnection we claim that this makes aitf deployable . in park and leepropose dpf ( distributed packet filtering ) , a distributed ingress - filtering mechanism for pro - actively blocking spoofed flows .in contrast , aitf aims at blocking _ all _ undesired including spoofed flows as close as possible to their sources .thus , it can not be replaced by dpf .on the other hand , dpf blocks most spoofed flows _ before _ they reach their destination i.e. , dpf is proactive , whereas aitf is reactive . in that sense , dpf and aitfare complementary . 
in keromytis_ propose sos ( secure overlay services ) , an architecture for pro - actively protecting against dos attacks the communication between a pre - determined location and a specific set of users who have authorized access to communicate with that location .in contrast , aitf addresses the more general problem of protecting against dos attacks any location accessible to all internet users .finally , and propose traceback solutions i.e. , mechanisms that enable a victim to reconstruct the path followed by attack packets in the face of source address spoofing .as already mentioned , an efficient traceback mechanism is necessary to aitf operation .we presented aitf , an automatic filter propagation mechanism , according to which each autonomous domain ( ad ) has a filtering contract with each of its end - hosts and neighbor ads . a filtering contract with a neighborprovides a guaranteed , significant level of protection against dos attacks coming through that neighbor in exchange for a reasonable , bounded amount of router resources . * given a filtering contract between a client and a service provider , which allows the client to send filtering requests per time unit to the provider , the provider can protect the client against a large number of undesired flows , by significantly limiting the effective bandwidth of each undesired flow .the provider achieves this by using only a modest number of filters . *given a filtering contract between a client and a service provider , which allows the provider to send filtering requests per time unit to the client , both the client and the provider need a bounded number of filters to honor their contract .we argued that aitf successfully deals with the biggest challenge to automatic filtering mechanisms : source address spoofing .namely , we argued that it is not possible for any malicious / compromised node to abuse aitf in order to interrupt a legitimate traffic flow , unless the compromised node is responsible for routing that flow , in which case it can interrupt the flow anyway .finally , we argued that aitf scales with internet size , because it pushes filtering of undesired traffic to the service providers of the attackers , unless the service providers are themselves compromised .fortunately , compromised routers are a very small percentage of internet infrastructure .thus , in the vast majority of cases , aitf pushes filtering of undesired traffic to the leaves of the internet , where filtering capacity follows internet growth . kihong park and heejo lee . on the effectiveness of route - based packet filtering fordistributed dos attack prevention in power law internets . in _ proceedigns of acm sigcomm 2001 _ , san diego , california , usa , august 2001 .alex c. snoeren , craig partridge , luis a. sanchez , christine e. jones , fabrice tchakountio , stephen t. kent , and w. timothy strayer .hash - based ip traceback . in _ proceedigns of acm sigcomm 2001 _ , san diego , california , usa , august 2001 .
denial of service ( dos ) attacks are one of the most challenging threats to internet security . an attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim s site with malicious traffic , clogging its tail circuit and interfering with normal traffic . at present , the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic . however , as dos attacks become increasingly sophisticated , manual filter propagation becomes unacceptably slow or even infeasible . in this paper , we present _ active internet traffic filtering _ , a new automatic filter propagation protocol . we argue that this system provides a guaranteed , significant level of protection against dos attacks in exchange for a reasonable , bounded amount of router resources . we also argue that the proposed system can not be abused by a malicious node to interfere with normal internet operation . finally , we argue that it retains its efficiency in the face of continued internet growth .
multivariate numerical integration on hyper - rectangles of large dimension is often encountered in many areas of engineering and of physical , natural and social sciences .it is known that the accuracy of polynomial approximation techniques ( e.g. cubature rules , generally speaking ) exponentially degrades on increasing the number of variables , so that beyond these routes have to be abandoned . in these situations oneis forced to resort , at last , to statistical integration techniques framed in the broad family of monte carlo ( mc ) routes . from the 40 s of past century , when statistical sampling was conceived at los alamos laboratory by scientists like von neumann , ulam and metropolis to face practical problems in particle physics , several improvements have been done to accelerate convergence , reduce uncertainty on the estimate , and limit the degradation of the efficiency at fixed computational cost as increases . in particular , we assisted at a flowering of variants of the basic sample mean integration ( e.g. quasi - monte carlo ) , and of the classical metropolis - hastings importance sampling mc ( is - mc in the following ) which is widespread in many branches of molecular and condensed matter physics where configurational partition functions need to be calculated . for example , functions of the genz s testing - family can by routinely integrated with high accuracy and low computational cost up to hundreds of variables by means of optimized quasi - random sampling strategies . despite of the large efforts that have been done to improve statistical integration , and of the constantly increasing computational power at disposal , there is still need for efficient algorithms which are robust regardless of the peculiar kind of integrand function . in this work we present a tool to perform multivariate integration by exploiting a markovian stochastic exploration of the integration domain while the integrand function is morphed in a controlled ( deterministic ) way .the strategy is rooted in an abstract interpretation of jarzynski s equality ( je in the following ) , which was derived about fifteen years ago in the context of the thermodynamics of systems ( mainly macromolecular ) subjected to thermal fluctuations and driven out of equilibrium by an external mean which has full control on a selected set of structural parameters . since a detailed description of the je on physical groundswould be not pertinent to the spirit of the present communication , we address the interested reader to the excellent reviews of refs . which comprise theory and experiments .we shall give here below only the essential lines to appreciate the integration methodology that we are going to present .in the essence , if denotes the helmholtz free - energy of a system at thermal equilibrium and _ constrained _ in a certain state specified by a set of controllable parameters , say , the free - energy change from state `` 1 '' to state `` 2 '' is where and are the so - called canonical configurational partition functions . explicitly , these are the integrals made over all _ unconstrained _ structural variables of the system , which fluctuate due to the contact with the thermal bath . 
is the configuration - dependent energy of the system ( here for two states `` 1 '' and `` 2 '' corresponding to and , respectively ) , is the absolute temperature , and the boltzmann constant .the je allows to evaluate by avoiding the explicit calculation of the partition functions , which becomes highly demanding ( or even unfeasible ) as the number of variables grows .namely , the je states that the free - energy difference can be cast into the exponential average of the amount of work performed by an external mean to drive the system from the equilibrium state `` 1 '' to the state `` 2 '' along non - equilibrium transformations where the controlled state - parameters are changed according to a prescribed ( but arbitrarily chosen ) protocol of finite duration .explicitly , the je is , where the average is taken over the ensemble of _ stochastic _ trajectories , each generated while the state - parameters are deterministically changed .we stress that , in thermodynamics , the work is identified with the amount of energy exchanged between the external mean and the system through a detailed action on some degrees of freedom of the system ( the set in this case ) ; thus , is obtained by accumulating the infinitesimal contributions to be evaluated ( i.e. , measured in experiments by knowing the applied forces , or calculated if trajectories are simulated ) at each instant along the specific trajectory .the je is valid under the mild conditions of _ i _ ) fluctuations of the uncontrolled variables is a markovian ( memory - less ) process , and _ ii _ ) the trajectory would sample the underlying canonical distribution proportional to after the protocol was stopped at some state .the je has been extensively applied to construct entire free - energy profiles between states connected by real , simulated , or even artificial steered trasformations .typical examples are real mechanical unfolding / refolding of biopolymers ( e.g. , rna hairpins ) by means of laser tweezers , simulated detachment of chemicals from binding sites of proteins , and virtual `` alchemical '' transformations of molecular moieties to evaluate solvation free - energies in a given environment .the strength of the je is that an accurate estimate of ( which means an accurate estimate of the ratio ) could be achieved from a limited number of runs starting from initial configurations drawn from the pool belonging to the same equilibrium state , and employing always the same protocol . in the experimental context , the je offers the remarkable link between a measurable quantity ( the work ) and the change of free - energy for a nanoscale system . on computational grounds ,the effort required by the je machinery to compute the ratio from a set of simulated transfomations is usually much lower than that required to compute the single partition functions by direct integration , and the accuracy of the outcome remains acceptable even when standard routes , like the metropolis is - mc , fail . in the present work we `` borrow ''the je and adopt it out - of - context to efficiently solve multidimensional integrals , just exploiting ( by analogy with physical transformations ) the possibility to substitute the explicit integration with the evaluation of what we can term the `` computational work '' in doing the `` externally controlled '' build - up of the integrand function ( that we shall term `` morphing '' in the following ) starting from an easily integrable known profile . 
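The displayed formulas referred to in this passage are missing from this copy of the text; for convenience, the standard relations they describe (the definition of the configurational partition functions and the Jarzynski equality itself) can be written as

\[
F_i = -k_B T \,\ln Z_i ,\qquad
Z_i = \int d\mathbf{x}\; e^{-E(\mathbf{x};\,\boldsymbol{\lambda}_i)/k_B T},\qquad
\Delta F = F_2 - F_1 = -k_B T\,\ln\frac{Z_2}{Z_1},
\]
\[
e^{-\Delta F / k_B T} = \big\langle\, e^{-W / k_B T} \,\big\rangle
\quad\Longleftrightarrow\quad
\Delta F = -k_B T\,\ln \big\langle\, e^{-W / k_B T} \,\big\rangle ,
\]

where the average runs over the ensemble of nonequilibrium trajectories generated with the chosen protocol, each started from an equilibrium configuration of state "1". These are the textbook statements of the relations, not a quotation of the paper's own equations.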
by viewing the canonical configurational partition functions of initial and final states ( physical context ) as nothing but multidimensional integrals of positive - valued functions ( computational context ) , and the steered non - equilibrium transformations on few system parameters while all the remaining , uncontrolled , degrees of freedom continue to fluctuate ( physical context ) as the analogous of the morphing of the integrand _ while _ the integration domain is stochastically explored by markovian moves compatible with the actual function landscape itself ( computational context ) , then an adaptation / exploitation of the je is straight devised as a tool to perform multidimensional integration . in a nave picture of adaptive nature of markov dynamics over an evolving landscape , sampling the integration domain _ while _ the features of the integrand function grow following an arbitrarily prescribed protocol is more efficient than exploring directly the given integrand landscape . to the best of our knowledge ,all applications of the je are strictly pertinent to the original physical contexts ( mainly chemical , namely the estimation of mean - field - potentials along specific coordinates of complex molecular or supra - molecular systems ) . with the present contributionwe intend to enucleate from the je , taking out the physical traits , its essential computational feature of machinery to perform multidimensional integration in efficient way .although apparently trivial once formulated , we believe that our abstract rephrasing may disclose the real powerful of the je and bring it to a broader audience than the community of researchers active in physical - chemical areas . herewe stress the crucial point that in statistics , namely in practices to sample from general distributions , the strategy named annealed importance sampling ( ais ) due to neal ( see also refs . and for recent reviews ) leads to an analogous of the je .being targeted to the sampling from a `` complicate '' ( eg ., multimodal ) distribution , the ais consists in morphing an initial easy - to - sample distribution up to the target one by filling the gap with a freely chosen number of bridging distributions .surprisingly , the ais seems to be almost unknown in the physical - chemical community , apart of few exceptions . 
by following neal s work, one can find a direct matching with jarzynski s relation when the developing of the intermediate distributions is translated in terms of transformation protocol ; on the other way around , the je framework gives to the ais a `` physical guise '' where the concept of `` work '' is the key - feature .once such a connection is made , our feeling ( and auspice ) is that the huge amount of expertises achieved in latter decade in the physical - chemical area can be trasferred to the development of pure numerical methods of stochastic integration .suggestions will be given in the course of our exposition , namely _i ) _ the possibility to generate paths via stochastic differential equations ( langevin - like equations ) in place of monte carlo chains , and hence to be inspired by the huge amount of studies of molecular brownian - like dynamics in condensed fluid phases ; _ ii ) _ devise optimal morphing protocols to improve accuracy and precision of the integral estimate , in analogy with what is done in molecular practices ( both simulations and experiments ) to reduce the energy dissipation during a steered transformation ; _ iii ) _ take benefit from the good practices , developed for the free - energy - difference evaluations , to control / estimate the errors on the outcome ; _ iv ) _ be aware of the vast physico - chemical literature of the last decade , where smart improvements of the basic implementation of the je are proposed ( the interested reader can find a valuable overview in ref . , and specific indications will be given in the section `` outlines and perspectives '' ) . in developing our exposition we shall present a simple mean to perform the function morphing , namely the rising of the whole integrand function from an initial flat profile .in addition , with a simple trick we shall leave the context of positive - valued functions to account for general profiles of the integrands .besides of drawing the main methodological lines , we also developed the jemdi ( jarzynski equality multidimensional integration ) library , an optimized and easy to use c++ algorithm implementing our approach to stochastic integration , which is freely distributed as open - source software for tests and further developments .the article continues as follows .first we provide a general outline about how to frame jarzynski s equality in the abstract context of integration .then we opt for the morphing protocol from a flat profile , which is the most safer and case - independent choice if the details of the integrand are unknown . in section `` computational issues '' we briefly present the jemdi software which currently implements such a choice ; details of the numerical solverare provided in the supplementary material .then we test the algorithm on model cases , and give the proof of its outperforming efficiency when compared to the standard is - mc route ( i.e. , the direct evaluation of the integral without morphing ) .this is not surprising for experts in je ( or ais ) , since non - equilibrium routes are known to achieve a likely result in a reasonable computational time while such a standard counterpart completely fails .furthermore we provide an estimate of the uncertainty and a criterion to judge the reliability of the outcome . 
in our tests ,exploration of the integration domain will be made mostly by means of is - mc moves , but we give also an example where langevin dynamics are employed .the final section is devoted to remarks and perspectives .in this section we shall present the integration strategy rooted in the abstract formulation / usage of jarzynski s equality .we start by considering the physically framed case of positive - valued integrand functions. then we extend the method to general integrands that may have zeros and/or sign changes within the integration domain . throughout the text we make free use of physical terminology by leaving to the reader the effort to keep any step at the most abstract level .let us first consider the case of a real and positive - valued function , with a set of parameters defining its profile and the argument as a -dimensional array of real variables .the function must be bounded and continuous in the interior of the integration domain with the further requirement that at boundaries the primitive function has a finite limit if diverges ( see discussion in section [ sec4 ] ) .our purpose is to set up an efficient route to determine the integral if the integral is known for a certain set of parameters , say , one can write where is the morphing factor , related to the change of into , to be determined .let us imagine to randomly pick a point from the distribution , and drive the morphing in a deterministic way according to an arbitrarily chosen protocol where is treated as time variable varying from zero to one ( as a matter of fact , it is nothing but a dimensionless progression variable ) ; `` during '' such a transformation , be free to explore the integration domain by following a general type of stochastic markov dynamics , , where is the propagation step . by producing a large number of these transformationseach conducted by employing the same protocol , starting from different initial states all sampled from the distribution , a non - equilibrium distribution will develop .the requirement of markov dynamics is important to assure that if the morphing was stopped at a certain and the dynamics continued , the distribution would relax to the underlying target distribution proportional to , that is .consider now the following path - integral along the -th stochastic trajectory , where we have introduced we can state that the morphing factor in eq .can be expressed by the following limit taken over an infinitely large number of trajectories : the proof simply rests on the direct recognition that eq . corresponds to the je if the following identifications are done : _ i _ ) and are homologous to the configurational partition functions evaluated over the canonical distributions respectively originated by the pseudo - potentials and ; _ ii _ ) the morphing of the function corresponds to a steered transformation following the deterministic protocol ; _ iii _ ) the markovian exploration of the domain is equivalent to the markovian dynamics over the uncontrolled degrees of freedom in the physical context of non - equilibrium transformations ; _ iv _ ) the path - integral in eq .is interpreted as the work to morph the function while the integration domain is stochastically sampled ; _ v _ ) eq .is the analogous of the work - exponential - average in the proper je , being equivalent to the free - energy difference between the morphed and the initial states .such a one - to - one mapping of our computational problem into the physical context allows us to take _ directly _ eq . 
with eqs . - as an _ exact _ result right on the basis of the sound validity ( as theorem ) of the je itself .the interested reader may find a transparent derivation of the je , for example , in section ii of ref .looking at such a derivation , the tight connection with neal s ais method will appear ( see sec . 2 of ref . ) ; namely , by taking a continuum of bridging distributions each labeled by , eq . 5 of ref . gives exactly the factors , and eq .2 yields ( for the specific application ) the je estimator on a finite number of transformations .the above formulation is rather general .notice that , unlike the original physical context where the energetics of the system and the dynamical responses are often fixed by the nature of the sample , here there is plenty of room to choose the reference state for which is known , to optimize the protocol ( meaning that also single parameters , , could be varied independently and in different ways ) , and to choose / optimize the kind of markov exploration of the integration domain .the target is to achieve numerical convergence on with the lowest number of trajectories of shortest length ( i.e. , number of elemental propagation steps , see below ) .insights will be given in what follows .the choice of the evolution law is guided , in the computational context , only by the need of generating a markov chain which ensures that if the morphing is stopped then the dynamics would settle over the stationary equilibrium distribution as discussed above ._ any _ propagator able to create such a markov chain can be employed in the algorithm .regardless of the specific kind of evolution law , reflecting conditions have to be applied at the boundaries of .a straightforward method to create a markov chain is to explore the integration domain by means of is - mc moves of maximum length in each dimension . in practice ,the trajectory is broken into steps of equal `` duration '' .a generic step begins at and ends at , and consists of a first part where the function morphing proceedes for the duration at the location frozen , and a second part where a move is meant to happen instantaneously up to .in schematic form we have .an unbiased move is first attemped with the sole limitation that .the move is readily accepted if it is downhill , i.e. , if . on the contrary , a random number is uniformly generated between 0 and 1 and the move is accepted if , rejected otherwise . with such a classical scheme due to metropolis _ , the requirement of relaxation to the underlying canonical distribution after stopping the morphing is automatically fulfilled by construction .work is performed only in the morphing part of a step , thus .the global amount per trajectory is then obtained by summing over the steps , which corresponds to take the discretized formulation of the integral in eq . along the path .the is - mc scheme is a fast and simple procedure that enables long - range rapid exploration and only requires the evaluation of the integrand function at each step .alternatively , evolution laws based on stochastic differential equations can be adopted , which require to supply also first - order derivatives ( which must be bounded and continuous in ) of the integrand function .the simplest and physically - framed propagation scheme of such a kind makes use of a langevin - like equation corresponding to a brownian - like exploration of the integration domain , i.e. 
_ { \left \lvert \begin{array}{l } \scriptsize{\bm{x}=\bm{x}({\hat{t } } ) } \cr \scriptsize{{\bm{\lambda}}={\bm{\lambda}}({\hat{t } } ) } \end{array } \right.}\end{aligned}\ ] ] where is the vector whose entries are independent sources of white noise : and , with the identity matrix , the dirac s delta - function , and are ensemble averages over the distribution of noise magnitudes . in practice , at the actual , for each variable one generates where is a value randomly sampled from a distribution ( usually gaussian , but not necessarily so , see remarks in the supplementary material ) with zero mean and unit variance . finally , in eq . is a `` diffusion matrix '' which can be freely designed to be both point - dependent and deterministically modulated along the morphing , under condition that it must be real - valued , symmetric and positive - definite .this freedom can be exploited , in principle , to optimize in subtle way the stochastic exploration of the integration domain . as for the is - mc evolution, a single step of duration is meant to be constituted by a morphing part followed by a langevin propagation .the infinitesimal amount of work is evaluated exactly as for the is - mc case , and the work per trajetory follows by summing all contributions .the choice of the reference state is crucial to improve the efficiency of integration .a good balance should be found between closeness of to ( intuitively this would reduce the amount of dissipation in analogy with the physical steered transformations ) , simplicity of its integration ( possibly analytical ) to get , and capability to sample initial configurations quickly and without artefacts from the distribution ( such a difficulty is lowered if a factorization is recognized in the integrand function ) .clearly , only a well educated guess may lead to identify a suitable integrable function of which is intended to be a perturbed form . on the contrary , a dangerous bias might be introduced . in the absence of some_ a priori _ knowledge , the simplest and safer choice is to start from a flat profile of the associated potential , that is for all , so that being the volume of the integration domain . in this casethe initial sampling reduces to an unbiased random drawing of independent points in the -dimensional space .a proper value of can be set as where is a guess ( even very rough ) of the integral ; with such a choice the morphing factor is expected to fall close to one .assessment of the outcome reliability on statistical grounds takes benefit of the sound experience gained in the context of the je applied to free - energy - difference calculations in physical ( mainly molecular ) systems .having generated trajectories , the best estimate of the integral is for such a relation is _ exact _ , while for any finite number of transformations the estimate bears an error , being the true value , which is due to the limited sampling of the low - work `` wing '' of the distribution of work values , . by looking at eq . , important trajectories which largely contribute are those with lower values of ; on the other hand , these trajectories are rarely encountered ( because of the low values in the distribution wing ) , hence their frequency of appearance , in a statistical sample of finite size , may largely deviate from the actual probability . 
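The recipe described so far, a flat reference profile, IS-MC morphing steps, and the exponential-work estimator, can be condensed into a short sketch. What follows is our own minimal Python mock-up, not the authors' jemdi C++ implementation; it assumes the pseudo-potential is taken as v(x) = -ln g(x) with the homogeneous linear protocol, and it sets the constant of the flat reference to zero, so that the reference integral is simply the volume of the hyper-rectangle (with this reading the scheme coincides with annealed importance sampling between the uniform distribution and one proportional to the integrand).

import numpy as np

rng = np.random.default_rng(1)

def integrand(x):
    # example positive-valued integrand; x is a length-d array
    return np.exp(-0.5 * np.sum(x ** 2))

def je_integrate(d, low, high, n_traj=1000, n_steps=300, max_move=0.5):
    """Flat-start morphing estimate of the integral of `integrand` over the
    hyper-rectangle [low, high]^d. Pseudo-potential V(x, lam) = -lam*ln g(x):
    lam = 0 is the flat reference (uniform sampling, known integral = volume),
    lam = 1 is the target integrand."""
    volume = float(high - low) ** d
    dt = 1.0 / n_steps
    works = np.empty(n_traj)
    accepted = attempted = 0
    for j in range(n_traj):
        x = rng.uniform(low, high, size=d)          # equilibrium draw at lam = 0
        log_g = np.log(integrand(x))
        w, lam = 0.0, 0.0
        for _ in range(n_steps):
            w += -dt * log_g                        # morphing work, position frozen
            lam += dt
            prop = x + rng.uniform(-max_move, max_move, size=d)
            attempted += 1
            if np.all((prop >= low) & (prop <= high)):   # reject moves leaving the domain
                log_g_prop = np.log(integrand(prop))
                dV = -lam * (log_g_prop - log_g)         # Metropolis test under V(., lam)
                if dV <= 0.0 or rng.random() < np.exp(-dV):
                    x, log_g = prop, log_g_prop
                    accepted += 1
        works[j] = w
    estimate = volume * np.mean(np.exp(-works))
    return estimate, works, accepted / attempted

est, works, acc = je_integrate(d=5, low=-3.0, high=3.0)
print(est, acc)   # exact value for this example is about 97.6 (product of 1-D Gaussian integrals)

The Metropolis move could equally well be replaced by a discretized Langevin update of the kind sketched above; the average acceptance ratio is returned because it is used later in the text as an empirical reliability indicator.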
in particular, the distribution of ( by supposing to repeat infinite times the calculation with a set of trajectories ) displays an average shift , that is a -dependent systematic error , plus a broadening which arises from an intricate interplay between the ( unknown ) features of the work - distribution - function and the finiteness of the ensemble of work values at disposal .an indication about the systematic error can be inferred from the zuckerman - woolf theory developed for free - energy - difference calculations .it was already known that , _ on average _ , one gets an overestimation ( see eq .56 of ref . ) , but the authors were able to derive a `` universal relation '' ( valid under mild conditions ) which links the systematic error to the variance of the outcomes . turning to our context , if we propagate back such an uncertainty to the integral ( we recall that the morphing factor is equivalent , in the essence , to the exponential of minus a free - energy - difference ) , the result for sufficiently high is that the integral is _ on average _ underestimated by where is the standard deviation of the distribution of outocomes ( compare with eq .17 of ref .this relation tells us that if , then ( accurate outcome ) and ( precision estimator ) can be taken as a likely estimate of the interval of confidence . in the followingwe shall indicate with such an uncertainty .at first instance one may evaluate from the raw outcomes as ^{1/2}\ ] ] where is seen as average over entries .the estimate can be eventually improved , for example , by means of resampling procedures from the dataset at disposal , such as the `` bootstrap '' route . herewe pursue a different choice borrowed from the practice of `` blocks averages '' in free - energy - difference calculations .it consists in randomly splitting the whole dataset with entries into groups ( in our tests , these groups are formed by consecutive trajectories as they are generated ) , and then taking ^{1/2}\end{aligned}\ ] ] where the morphing factors are computed with the trajectories of each -th block .is the estimate of the standard deviation of the mean , evaluated from the spread of partial outcomes ; the formula is meant to be improved via t - student correction for low values of . clearly , the result depends on the choice of , although such a dependence is weak when is of the order of few tens .the idea is to choose yielding blocks which are supposed to be large enough to provide `` sensible '' estimates of the integral .we stress that the evaluation of from the data at disposal can be highly inaccurate due to the poor sampling of , so that the estimated ratio could result ( apparently ) small even if the systematic error is relevant .therefore one needs independent criteria to establish if is well sampled to allow one to take as reliable indicator of accurate integration .a criterion has been provided by jarzynski ( see note 23 of ref . ) , hummer ( section iv of ref . ) , and reaffirmed by others : good sampling of the low - work wing of is likely attained if , where is the standard deviation of the work values ( still to be estimated , unfortunately , from the finite set of data at disposal ) .if such a condition is not fulfilled , one may slower the transformation protocol and/or increase . 
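Continuing the sketch given above, the block-average error bar just described can be mocked up as follows; the t-Student correction mentioned in the text is omitted, and `works` is the array of work values returned by the earlier sketch.

import numpy as np

def block_error(works, volume, n_blocks=20):
    """Best estimate of the integral plus a block-average error bar: the work
    values are split into n_blocks groups, a partial estimate is computed from
    each group, and the spread of the partial estimates gives the standard
    deviation of the mean."""
    blocks = np.array_split(np.asarray(works), n_blocks)
    partial = np.array([volume * np.mean(np.exp(-b)) for b in blocks])
    best = volume * np.mean(np.exp(-np.asarray(works)))
    err = np.std(partial, ddof=1) / np.sqrt(n_blocks)
    return best, err

print(block_error(works, volume=6.0 ** 5))   # domain [-3, 3]^5 from the example above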
herewe pursue a different empiric criterion which is applicable when the is - mc route is employed to make the markov exploration of the integration domain ( in section [ sec4 ] we shall validate its effectiveness ) .namely we evaluate the average percentage of accepted moves over the whole ensemble of paths and over the whole morphing schedule , .then by borrowing the recommendation for standard metropolis mc practices , we simply check if is around 50% .this follows by the guess that an efficient sampling of the integration domain ( although on average ) is associated to a slow - enough transformation ( a `` quasi - static '' one in the thermodynamics language ) , hence to low `` dissipation '' ( energy dissipation in the thermodynamics acceptation , see remarks in section [ sec5 ] ) , and hence to a precise / accurate outcome .here we shall tailor the above general scheme to the case of homogeneous morphing controlled by a unique parameter varying from to .moreover , the basic constant - rate linear schedule is employed : .being interested in evaluating the integral , we set , , and } ] , ] .the exact result in 3 variables obtained with dqand is 4595.90 .satisfactory convergence of the integration has been obtained with , partitioned into blocks to evaluate the uncertainty , , and by applying and for the parameters discussed in section [ sec2sub6 ] .the outcome in 15 variables was versus the exact value of , while in 30 variables we obtained versus the exact value of .computational times were respectively 2.6 and 4.7 hours for parallel runs with the 12-processors machine .these results still confirm that the method is able to integrate problematic and complex functions without particular problems .the main purpose of this work was to bring the computational essence of jarzynski s equality to the pure numerical context of multivariate integration devoid of any physical trait . before indicating some lines of investigation , we remark that here we have presented the very basic jarzynski s strategy , as it was presented in the early 1997 article .we have outlined the main ideas and presented the basic formulation presently implemented in our jemdi c++ routine , which revealed to be high - performing in a series of tests . in particular , our new approach where is - mc moves are combined with underlying function morphing , offers a chance to evaluate multidimensional integrals in an acceptable computational time . there may be critical situations ( depending on the number of variables , features of the integrand , extensions of the integration domain ) where the convergence rate is still too low , but the present strategy is expected to be much better performing than the standard counterpart .not last , the tool provides the statistical uncertainty on the outcome and the criterion to guess if the bias error is negligible ; this information is essential to judge ( and control ) the reliability of the result .there is still plenty of room for developments , mainly at the following three levels not yet explored : a ) selection of a good reference state from which the integrand morphing develops , b ) optimization of the markovian propagator , c ) tuning of the morphing protocol ( i.e. 
, use of non - linear and multi - parameter growth of the pseudo - potential ) and devising methods to minimize `` dissipation '' .the first point should be , in our feeling , the most effective item towards large improvements .notice that , from the ais perspective , adopting the flat reference state corresponds nothing but to adopt the simplest starting distribution which allows to get independent initial draws and which fully satifies ( regardless of the integrand function ) neal s indications : `` easier to sample from , and which is broad enough to encompass all potential modes '' ( quotation taken from sec . 9 of ref . ) . concerning the point b ) , we recall that the exploitation of langevin dynamics deserves consideration since it can be a mean for efficient exploration of the integration domain if intuition can address to a proper tailoring of the point - dependent ( and possibly also progression - dependent ) diffusion matrix .this opens a wide landscape to explore ; an interested investigator can stand on the huge literature about single - molecule dynamics in the overdamped ( diffusive ) regime of motion with a configuration - dependent friction matrix able to affect the pattern of stochastic trajectories giving rise , for example , to saddle - point avoidance phenomena in the multidimensional pseudo - potential landscape ( the logarithm of the positive - valued integrand function ) . still concerning the setting of mcmoves , a promising route coming from non - equilibrium simulations of molecular systems under steered transformations is that proposed by chelli who combined the `` configurational freezing '' scheme with the `` preferential sampling '' approach .the idea is that in steered physical systems , energy dissipation is mainly due to fluctuations close to the `` hot region '' where the external intervent takes place .this led to conceive a scheme where particles are moved within a mobility area ( roughly , a solvation shell ) which encloses the hot region , with a rule to select with preference the particles which are closer to the hot region ; particles moves are made in the way that the resulting chain is markovian and detailed balance is guaranteed , as required for the applicability of work fluctuations theorems like jarzynski s equality here treated .model cases treated in ref . were alchemical transformations of a water molecule into a methane molecule in the solvation environment , and formation of molecular dimers with solvation .however , how to transfer such a concept to multidimensional integration is a challenging target : by analogy , we guess that the `` mobility region '' , and the `` hot region '' within it , should be subsets of the whole variables to be determined through a sensitivity - like analysis applied to the pseudo - potential ( in fact , is the analogous , in stochastic thermodynamics , of the the infinitesimal amount of energy exchanged as heat ) .about the item c ) , a starting point could be the work of schmiedl and seifert on the construction of optimal protocols , based on the criterion to minimize the average work performed along the non - equilibrium trasformations . 
on physical grounds ( second principle of thermodynamics for isothermal systems at the nanoscale ) ,the average work is higher than the free - energy - difference by an amount that corresponds to the energy which is dissipated , on average , in driving the transformation .it can be demonstrated that such a dissipation is linked to the spread of work values and hence , ultimately , to the precision and bias of the outcome when jarzynski s estimator ( eq . ) is applied on a finite number of realizations .all considerations made for the physical problems can be transferred to the abstract context of multidimensional integration .a further suggestion to improve the basic route has been proposed by vaikuntanathan and jarzynski . inwhat they called `` escorted '' transformations , an artificial flow - field of the form ( our notation ) which `` suitably '' couples stochastic variables and controlled parameters , is added to bias the trajectories in the way to minimize ( even up to let vanish ) the dissipation .a reformulation of the je has been derived by the authors to account for the presence of such a flow - field ( see eqs 10 , 13 and 14 of the cited work ) .as the authors argue , trial and error experience could lead to contruct optimal schedules for classes of physical systems ; we say that the same would hold also for `` classes '' ( to be properly defined ) of integrand functions .at last we mention the interesting idea to sample the paths ( the trajectories ) according to their weight in the exponential average ( see section [ sec2sub4 ] ) ; in the method named single - ensemble nonequilibrium path - sampling ( seps ) , a properly biased sequence of paths is generated using the work as the variable in a metropolis mc scheme .all these ideas have been tested on low - dimensional cases , mostly uni- or bi - dimensional , but their application to high - dimensional cases would need to face the formidable problem of setting some key - ingredients case by case : optimal protocol , optimal flow - field , optimal bias function of the low - work wing of the work distribution function . on the contrary ,our basic implementation of jarzynski s equality has the merit to be directly applicable with the only need , as usual in computational practices , to check convergence on the outcomes .finally we like to mention our recent extension of the strategy here presented to the evaluation of nested sums over a large number of indexes with positive / negative addends ( a calculation impossible to tackle by exhaustive evaluation of each addend ) .this can be seen as the discrete counterpart of the multidimensional integration . in this case , the efficiency of the addends morphing ( still in combination with the je ) can be quantified , and appreciated , by looking at the incredibly small ratio between the number of required addends evalutations versus the total number of addends .+ + * acknowledgments * calculations were run on the hpc hardware of the `` centro di chimica computazionale di padova '' ( c3p ) hosted at the department of chemistry of the university of padova .j. liphardt , s. dumont , s. b. smith , i. tinoco , jr . , c. bustamante , equilibrium information from nonequilibrium measurements in an experimental test of jarzynski s equality , science 296 ( 2002 ) 1832 - 1835 .p. nicolini , d. frezzato , c. gellini , m. bizzarri , r. 
chelli , towards quantitative estimates of binding affinities for protein - ligand systems involving large inhibitor compounds : a steered molecular dynamics simulation route , j. comput .34 ( 2013 ) 1561 - 1576 .on the other hand , it can not be said _ a priori _ if the basic implementation of the non - equilibrium je is superior to the many evoluted `` equilibrium '' strategies nowadays well assessed to evaluate the ratios , like the non - boltzmann biased sampling known as umbrella sampling , or even to the more traditional thermodynamic integration and free - energy - perturbation routes ( in this regard , see the analysis presented in ref . and references therein ) .interestingly , the je embeds , as limit cases , the latter mentioned methods : the thermodinamic integration ( known since the early formulation of kirkwood [ j. chem .phys . 3 ( 1935 )300 - 313 ] ) is recovered in the limit of infinetely slow modulation of , while the free - energy - perturbation ( due to zwanzig [ j. chem .( 1954 ) 1420 - 1426 ] ) corresponds to an instantaneous change of between the end states .we like to mention that the morphing strategy has been recently presented by us in the context of free - energy calculations in complex molecular systems [ see m. zerbetto , a. piserchia , d. frezzato , j. comput .35 ( 2014 ) 1865 - 1881 ] .in such a case one performs the morphing of the whole energy landscape of the system .although this strategy is still a novelty in the physical - chemical literature , notice its analogy with the so - called alchemical transformations . turning for a while to the ais conterpart of the je , we remark that a similar conclusion was already formulated by neal who stated that `` [ ... ] nightmare scenarios in which wrong results are obtained without there being any indication of a problem are possible [ ... ] this can occur when the distribution of the importance weights has a heavy upward tail that is not apparent from the data collected '' ( quotation taken from sec. 9 of ref . ; neal s weights correspond to the terms , hence he refers to our low - work wing ) .h. oberhofer , c. dellago , p. l. geissler , biased sampling of nonequilibrium trajectories : can fast switching simulations outperform conventional free energy calculation methods ? , j. phys . chem . 109( 2005 ) 6902 - 6915 .calculations have been performed with two machines at disposal at the c3p facility of the padova university ( www.chimica.unipd.it/licc ) : 4 nodes with 4 intel woodcrest dual core 2.6 ghz processors each ( connected via infiniband ) , and a single - node with 12 processors intel xeon l5640 2.27 ghz .c c c c c c n var & & & & % error & time / s15 & 5000 & & & 0.66 & 70 & & & & -0.17 & 606 & 50000 & & & -0.05 & 597 & & & & 0.03 & 537630 & 5000 & & & 0.54 & 117 & & & & -0.41 & 1129 & 50000 & & & -0.48 & 1022 & & & & -0.14 & 1002560 & 5000 & & & 3.05 & 221 & & & & -1.43 & 2195 & 50000 & & & 0.92 & 1998 & & & & -0.28 & 2163490 & 5000 & & & -14.95 & 312 & & & & -1.29 & 3076 & 50000 & & & -4.70 & 3363 & & & & -0.003 & 30106 ( see text ) over the domain from -3 to + 3 per each variable , with ( a ) 15 , ( b ) 30 , ( c ) 60 , and ( d ) 90 variables ; values are given versus the maximum length of is - mc moves per each dimension , .estimates are obtained with 5000 is - mc trajectories , each of length steps ( partition into blocks is taken to estimate the errors ) .horizontal lines show the exact values respectively of , , , and .labels on points indicate the average percentage acceptance of moves . 
[figure caption: estimates of the integral (see text) over the domain from -3 to +3 per each variable, with (a) 15, (b) 30, (c) 60, and (d) 90 variables, obtained with 5000 langevin trajectories (partition into blocks is taken to estimate the errors); horizontal lines show the exact values.]
[figure caption: estimate versus the number of langevin trajectory steps for the case displayed in figure 2, for three values of the diffusion coefficient: 0.03, 0.3, and 30.0; the evaluation uses a starting point randomly generated only once and then applied in all three calculations.]
we present a computational strategy for the evaluation of multidimensional integrals on hyper - rectangles based on markovian stochastic exploration of the integration domain while the integrand is being morphed by starting from an initial appropriate profile . thanks to an abstract reformulation of jarzynski s equality applied in stochastic thermodynamics to evaluate the free - energy profiles along selected reaction coordinates via non - equilibrium transformations , it is possible to cast the original integral into the exponential average of the distribution of the pseudo - work ( that we may term `` computational work '' ) involved in doing the function morphing , which is straightforwardly solved . several tests illustrate the basic implementation of the idea , and show its performance in terms of computational time , accuracy and precision . the formulation for integrand functions with zeros and possible sign changes is also presented . it will be stressed that our usage of jarzynski s equality shares similarities with a practice already known in statistics as annealed importance sampling ( ais ) , when applied to computation of the normalizing constants of distributions . in a sense , here we dress the ais with its `` physical '' counterpart borrowed from statistical mechanics . + dipartimento di scienze chimiche , universit degli studi di padova , via marzolo 1 , i-35131 , padova , italy + * diego.frezzato.it
quantum fault tolerance is a comprehensive framework which promises successful quantum computation despite errors to individual computational elements provided the error rate is below a certain threshold .this framework has been extensively researched over the past 15 years resulting in detailed rules on how to implement all elements of a quantum computation in a fault tolerant manner .one of the basic elements of quantum fault tolerance is to ensure that an error that occurs on one qubit can not spread to multiple qubits .application of quantum error correction ( qec ) then corrects the single error . utilizing the entire framework of quantum fault tolerance in a practical quantum computation, however , promises to be a difficult and expensive proposition in terms of the number of physical qubits required and the number of physical gates implemented .thus , it is worthwhile to explore the possibility of relaxing some of the strict rules required by the framework .the calderbank - shor - steane ( css ) codes , a subclass of stabilizer quantum error correction codes , have proven to be very useful for the purposes of quantum fault tolerance .the reason for this is that clifford gates can be performed in a bit - wise fashion .however , clifford gates alone can not be used to implement universal quantum computation .an additional gate such as the -gate , also known as the -gate , or toffoli gate is necessary .these additional gates can not be performed in a bit - wise fashion and thus turn out to the be the most difficult part of a fault tolerant quantum computation . in this paperwe explore the implementation of the -gate on a [ 7,1,3 ] quantum error correction code ( or steane code ) , the most simple of the css codes .the -gate is a single qubit phase shift with the matrix representation : its fault tolerant implementation for the [ 7,1,3 ] quantum error correction code requires an ancilla logical qubit in the state where and are the logical zero and one basis states .a controlled - not ( cnot ) gate is then implemented between the ancilla and data qubits , the physical qubits storing the encoded logical qubit of information , with the ancilla as the control .the data qubits are measured and , if the measurement outcome is zero , the ancilla state is projected into the intial state of the data qubits with an applied -gate . if the measurement outcome is a one , a not gate must be applied to the ancilla qubits to attain the desired outcome .the cnot is a clifford gate and can thus be applied bit - wise between the data and ancilla qubits. the not gate , if necessary , is also a clifford gate .thus , the most difficult part of implementing a the -gate is the encoding of the state . in this paperwe analyze three methods of constructing the encoded state to see which can be used to implement usable -gates for fault tolerant quantum computation . by` usable ' we mean that the fidelity of the gate after application of perfect error correction has no first order error terms , i.e. 
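Before turning to the error model, the measurement-based construction just described can be checked on bare, unencoded qubits in a few lines. The sketch below assumes the stripped matrix is the standard pi/8 gate diag(1, exp(i*pi/4)) and follows only the outcome-0 branch of the measurement; the correction needed for the other outcome, the [7,1,3] encoding and all noise are omitted.

import numpy as np

T = np.diag([1.0, np.exp(1j * np.pi / 4)])      # assumed form of the phase gate

def teleported_T(psi):
    """Apply the phase gate to |psi> via an ancilla (|0>+e^{i pi/4}|1>)/sqrt(2),
    a CNOT with the ancilla as control and the data qubit as target, and a
    measurement of the data qubit, keeping only the outcome-0 branch."""
    ancilla = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
    state = np.kron(ancilla, psi)               # ordering: (ancilla, data)
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    state = cnot @ state
    branch = state.reshape(2, 2)[:, 0]          # project the data qubit onto |0>
    return branch / np.linalg.norm(branch)

psi = np.array([0.6, 0.8], dtype=complex)
print(abs(np.vdot(T @ psi, teleported_T(psi))))   # overlap ~ 1 up to a global phase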
, all errors that occur during the gate are in principle correctable .the first method follows the tenets of fault tolerance as detailed in .a logical state is constructed via error correction on a state of all zeros .measurement of the logical zero projects the qubits into .both the error correction and measurement are performed following the rules of fault tolerance .in addition , as part of the adherence to the rules of fault tolerance , the measurement is repeated until the same result is obtained twice in a row .the second method is to instead construct via the encoding gate sequence of .this construction does not follow the rules of fault tolerance .nevertheless , we have previously shown that gate encoded logical states may be useable for quantum computation ( see also ) .after the gate encoding , the logical zero state is projected into the desired state following the tenets of fault tolerance as per the first method .the third method is to use the gate encoding sequence to directly encode the single qubit state into the state .the first method describes a procedure which completely adheres to the rules of fault tolerance and thus the implemeted -gate is expected to be usable for fault tolerant quantum computation .the second method does not follow fault tolerance procedure in constructing the logical zero but does follow them for measurement . the final method does not follow the rules of fault tolerance for any of its sub - protocols . we show that despite the lack of complete adherence to the rules of fault tolerance , the second method implements a logical -gate with fidelities comparable to that of the first method ( the ` fault tolerant method ' ) while the third method does not .specifically , the gate fidelity of the logical -gate implemented via the second method after perfect error correction does not have any first order error terms and should thus be usable for fault tolerant quantum computation .this implies that , in general , it may not be necessary to strictly and completely adhere to the tenets of quantum fault tolerance in order to implement fault tolerant quantum computation .the error model used in this paper is a non - equiprobable pauli operator error model with non - correlated errors . as in ,this model is a stochastic version of a biased noise model that can be formulated in terms of hamiltonians coupling the system to an environment . in the model used here, however , the probabilities with which the different error types take place is left arbitrary : the environment causes qubits to undergo a error with probability , a error with probability , and a error with probability , where , are the pauli spin operators on qubit . 
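As a concrete reading of this error model, the following sketch applies an ideal gate and then independent Pauli errors on each qubit involved. The symbols px, py, pz stand in for the (stripped) probabilities just defined; placing the Pauli errors after the ideal gate is our assumption about the Kraus operators described next.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def noisy_1q_gate(rho, U, px, py, pz):
    """Ideal single-qubit gate U followed by an X, Y or Z error with
    probabilities px, py, pz (no error with probability 1 - px - py - pz)."""
    rho = U @ rho @ U.conj().T
    return ((1.0 - px - py - pz) * rho
            + px * X @ rho @ X + py * Y @ rho @ Y + pz * Z @ rho @ Z)

def noisy_2q_gate(rho, U, px, py, pz):
    """Ideal two-qubit gate U followed by the same Pauli channel applied
    independently to each of the two qubits (16 Kraus terms in total)."""
    rho = U @ rho @ U.conj().T
    paulis = [np.eye(2, dtype=complex), X, Y, Z]
    probs = [1.0 - px - py - pz, px, py, pz]
    out = np.zeros_like(rho)
    for pa, a in zip(probs, paulis):
        for pb, b in zip(probs, paulis):
            k = np.kron(a, b)
            out += pa * pb * k @ rho @ k.conj().T
    return out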
for example , a single qubit gate , performed in such an environment on a qubit in the state undergoes the following evolution : where is the identity matrix , , and the terms can be regarded as kraus operators for the single qubit gate .similarly , a two - qubit gate , implemented in this environment on a two qubit state actually implements : where terms can be regarded as the 16 kraus operators .note that errors on the two qubits taking part in the two qubit gate are independent and not correlated .we assume that only qubits taking part in a gate operation will be subject to error .qubits not involved in a gate are assumed to be perfectly stored .while this represents an idealization , it is partially justified in that it is generally assumed that stored qubits are less likely to undergo error than those involved in gates ( see for example ) .in addition , in this paper accuracy measures are calculated only to second order in the error probabilities thus the effect of ignoring storage errors is likely minimal .finally , we note that non - equiprobable errors occur in the initialization of qubits to the state and measurement ( in the or bases ) of all qubits .the method for encoding a logical zero state in the steane code following the rules of fault tolerance is to apply error correction following the rules of fault tolerance to 7 qubits all initially in the state .were the initialization perfect there would be no need to perform the bit - flip syndrome measurements as they will have no effect on the state of the qubits . due to the non - equiprobable error environment , however , the initial state of the qubits will not be but .neverthless , due to risk of doing more harm than good , we choose not to apply the bit - flip syndrome measurements , and instead apply the phase - flip syndrome measurements only ( this is done twice to conform with the strictures of fault tolerance ) .we analyze the scenario where all syndrome measurements yield a zero .because encoding is done ` off - line ' one can choose to utilize only the encoded states with this outcome .error correction following the rules of fault tolerance requires proper ancilla qubits to determine the syndrome measurement .we choose four - qubit shor states for ancilla as they require the least number of qubits and are thus most likely to be experimentally accessible in the short term .shor states are simply greenberger - horne - zeilinger ( ghz ) states with hadamard gates applied to each qubit . because the shor states themselves are constructed in a noisy environment ( here the nonequiprobable error environment ) , verification via parity checks on pairs of qubits is necessary to ensure accurate construction . in consonance with the results of apply one verification .circuits for shor state construction and verification , and the circuit for the three bit - flip syndrome measurements used to encode the logical zero state are shown in fig .[ ft ] . ) on the control qubit and ( ) on the target qubit connected by a vertical line . 
represents a hadamard gate .the procedure entails constructing a ghz state which is verified using an ancilla qubit .hadamard gates are applied to each qubit to complete shor state construction .bottom : phase - flip syndrome measurements for the [ 7,1,3 ] code following fault tolerance procedures using shor states .to ahere to the tenets of fault tolerance , each shor state ancilla qubit must interact with only one data qubit .the error syndrome is determined from the parity of the measurement outcomes of the shor state ancilla qubits .note that the shor states utilized are without the final hadamard gates and thus we reverse the roles of the control and target qubits and measure the ancilla in the -basis as explained in . following fault tolerance procedure each of the syndrome measurements is repeated twice.,width=321 ] after constructing a logical zero state we are ready to project into the state . to do this following the rules of fault tolerance we need a seven qubit shor state .the shor state construction , shown in fig .[ shor7 ] , is done in the non - equiprobable error environment and employs three verification steps .we then apply controlled - m gates given by : with the shor state qubits as control and the logical zero state qubits as targets .measurement of the shor state ( with even parity outcome ) completes the projection and the construction of the logical state .the entire procedure is done until the same syndrome is attained twice in a row to ensure no errors have taken places during the projection . ., width=302 ] the second method we use to construct the logical is the same as the fault tolerant method except that instead of using a logical zero state encoded via the rules of fault tolerance we use a logical zero constructed from the encoding gate sequence of ref . shown in fig .[ gsenc ] .this construction does not follow the tenets of fault tolerance ; an error on one qubit can easily spread to other qubits . the final method used to construct the logical is to directly encode the one qubit state , which requires applying a hadamard gate and a -gate to the first qubit and then follow the gate encoding sequence above . againthis method does not follow the rules of fault tolerance . however , it is the shortest and most direct method of constructing the logical state .we have simulated all of these construction methods in the non - equiprobable error environment described above .after construction of the logical we apply a cnot gate between it and a perfectly encoded arbitrary state .the cnot is implemented in the non - equiprobable pauli error environment .the encoded arbitrary state is then ( noisily ) measured and , assuming the measurement outcomes have even parity the output of the -gate on the arbitrary encoded state is found on the qubits initially in the logical state ( if the measurement outcomes yield odd parity a not gate must be applied ) . 
we will eventually replace the arbitrary state with the four states necessary to simulate one qubit quantum process tomography. this will allow us to calculate a logical gate -matrix. we invoke a number of accuracy measures to compare the three -gate implementation methods. the goal of these measures is to determine the possibility of using the various -gate implementations in fault tolerant quantum computation. the first accuracy measures are the fidelity of the constructed seven-qubit state, where is the state resulting from simulating our state construction methods, and the fidelity of the one-qubit logical state calculated by perfectly decoding , given by . these fidelities, shown in table [ tab1 ], suggest how well the state construction process is carried out but may not indicate how accurately the -gate utilizing this state will perform. two interesting points are demonstrated by the fidelity results. first, the third method, in which no part of the state construction follows the rules of fault tolerance, in general produces higher fidelity states than the other two methods. second, when implementing either of the first two methods the errors are significantly less important than the other two error types. this is not true for the third method. the gate fidelities, shown in table [ tab3 ], follow the same trends found in the arbitrary output state fidelities. they demonstrate the usability for quantum computation not only of the fault tolerant method -gate but also of the -gate implemented using a gate encoded logical zero state. finally, we note that the fidelity after noisy error correction is not necessarily better than the fidelity before noisy error correction. this should not be taken to mean that error correction is not necessary. we have already seen that fidelity at one stage of a computation does not translate into optimum performance at a later stage of the computation. the most we can say is that this may suggest that error correction need not be applied after every computational step. this will be explored in future work. the simulations presented in this work shed light on a number of issues related to fault tolerance. first, we question whether every element of a computation must indeed be constructed in a manner consistent with the rules of fault tolerance, as outlined for example in , to be usable for practical quantum computation. we define a `usable' operation as one that, after perfect error correction, has no remaining first order error probability terms in the fidelity. the -gate is explored since, as a non-clifford gate, it is the most difficult to implement for a css code and is therefore most in need of shortcuts. we have found that the logical zero ancilla called for need not be constructed following the tenets of fault tolerance and that a gate sequence encoded logical zero will yield a usable -gate. this provides a significant savings in the number of gates, time of operation, and qubits. however, a direct gate encoding of the state will not yield a usable operation. additional simulations demonstrate that another attempted shortcut, applying the projection of the logical zero state into the state only once and not repeating it to attain the same measurement results twice in a row, will also not yield a usable -gate. throughout we have looked at fidelities of the gate implementation and in this way determined whether a gate is `usable.' are usable gates fault tolerant?
can they be used for arbitrarily long computations without undue build-up of errors? though we cannot prove equivalence between usable and fault tolerant, there is evidence implying that this is the case. first, when perfect error correction is applied to a usable gate it achieves perfect fidelity (to at least second order). this implies that the gate is fault tolerant, as error correction will fix all errors. second, upon noisy error correction the usable gate has a fidelity equivalent to the gate implemented via the fault tolerant method, implying that any errors from previous protocols will be washed out by later ones and, again, that the gate is fault tolerant. a second issue is the need to perform error correction after every step in a computation. while this issue must be fully addressed elsewhere, we would like to make three points here. first, in the fault tolerant method we did not apply the bit-flip syndrome measurements for logical zero state encoding even though initialization was performed imperfectly. this lack does not appear to have negatively affected any results. the fault tolerant method still yielded usable gates. second, we did not apply quantum error correction to the logical zero states in either the fault tolerant method or the gate-encoded logical zero method, and still the logical zero states led to implementing usable -gates. third, applying realistic (noisy) error correction after implementation of the -gate has not improved the fidelity of the operation. if anything, it makes it worse. perhaps applying a few operations before error correction would not severely harm the fidelity of the operations. the third issue we would like to point out is the utility of the logical -matrix in evaluating the accuracy of the -gate performance. the -matrix is easily transformed into kraus operators which properly describe the one qubit sequence: perfect encoding, implementation of the -gate, perfect decoding. such kraus operators may be useful for simulations of quantum fault tolerance. in conclusion, we have explored the possibility of utilizing non-fault tolerant methods to implement -gates for fault tolerant quantum computation. we have shown that when certain elements of fault tolerant protocols are relaxed, the -gate can still be implemented in such a way that (ideal) error correction would correct errors to first order (in the fidelity). relaxing other elements of fault tolerance, however, would cause the gate to be unusable. further work is necessary to outline a general model of what elements are `non-essential' in this way. while this work was done in the context of the [ 7,1,3 ] quantum error correction code, we believe it would be immediately applicable to other css codes and possibly to more general codes. our work also has implications for the question of how often error correction must be applied during a fault tolerant quantum computation, and we have begun to explore the utility of logical qubit kraus operators for fault tolerant simulations. j. preskill, proc. r. soc. lond. a *454*, 385 (1998). p.w. shor, _proceedings of the 35th annual symposium on foundations of computer science_ (ieee press, los alamitos, ca, 1996). d. gottesman, phys. rev. a *57*, 127 (1998). p. aliferis, d. gottesman, and j. preskill, quant. inf. comput. *6*, 97 (2006). m. nielsen and i. chuang, _quantum computation and quantum information_ (cambridge university press, cambridge, 2000). p.w. shor, phys. rev. a *52*, r2493 (1995). a.r.
calderbank and p.w. shor, phys. rev. a *54*, 1098 (1996); a.m. steane, phys. rev. lett. *77*, 793 (1996). a.m. steane, proc. r. soc. lond. a *452*, 2551 (1996). y.s. weinstein, phys. rev. a *84*, 012323 (2011). s.d. buchbinder, c.l. huang, and y.s. weinstein, quant. inf. proc. *12*, 699 (2013). v. aggarwal, a.r. calderbank, g. gilbert, and y.s. weinstein, quant. inf. proc. *9*, 541 (2010). p. aliferis and j. preskill, phys. rev. a *78*, 052331 (2008). k.m. svore, b.m. terhal, and d.p. divincenzo, phys. rev. a *72*, 022317 (2005). y.s. weinstein and s.d. buchbinder, phys. rev. a *86*, 052336 (2012). i.l. chuang and m.a. nielsen, j. mod. opt. *44*, 2455 (1997).
we simulate the implementation of a -gate, or -gate, for a [ 7,1,3 ] encoded logical qubit in a non-equiprobable error environment. we demonstrate that the use of certain non-fault tolerant methods in the implementation may nevertheless enable reliable quantum computation while reducing basic resource consumption. reliability is determined by calculating gate fidelities for the one-qubit logical gate. specifically, we show that despite using non-fault tolerant procedures in constructing a logical zero ancilla to implement the -gate, the gate fidelity of the logical gate, after perfect error correction, has no first order error terms. this means that any errors that may have occurred during implementation are `correctable' and fault tolerance may still be achieved.
in the fields of audio signal processing and hearing research , continuous research efforts are dedicated to the development of optimal representations of sound signals , suited for particular applications . however ,each application and each of these two disciplines has specific requirements with respect to _ optimality _ of the transform . for researchers in audio signal processing ,an optimal signal representation should allow to extract , process , and re - synthesize relevant information , and avoid any useless inflation of the data , while at the same time being easily interpretable .in addition , although not a formal requirement , but being motivated by the fact that most audio signals are targeted at humans , the representation should take human auditory perception into account .common tools used in signal processing are linear time - frequency analysis methods that are mostly implemented as filter banks . for hearing scientists , an optimal signal representation should allow to extract the perceptually relevant information in order to better understand sound perception . in other terms , the representation should reflect the peripheral `` internal '' representation of sounds in the human auditory system .the tools used in hearing research are computational models of the auditory system .those models come in various flavors but their initial steps in the analysis process usually consist in several parallel bandpass filters followed by one or more nonlinear and signal - dependent processing stages . the first stage , implemented as a ( linear ) filter bank , aims to account for the spectro - temporal analysis performed in the cochlea .the subsequent nonlinear stages aim to account for the various nonlinearities that occur in the periphery ( e.g. cochlear compression ) and at more central processing stages of the nervous system ( e.g. neural adaptation ) .a popular auditory model , for instance , is the compressive gammachirp filter bank ( see sec .[ ssec : audfilters ] ) . in this model , a linear prototype filter is followed by a nonlinear and level - dependent compensation filter to account for cochlear compression . because auditory models are mostly intended as perceptual analysis tools , they do not feature a synthesis stage , i.e. they are not necessarily invertible .note that a few models do allow for an approximate reconstruction , though .it becomes clear that filter banks play a central role in hearing research and audio signal processing alike , although the requirements of the two disciplines differ .this divergence of the requirements , in particular the need for signal - dependent nonlinear processing in auditory models , may contrast with the needs of signal processing applications .but even within each of those fields , demands for the properties of transforms are diverse , as becoming evident by the many already existing methods .therefore , it can be expected that the perfect signal representation , i.e. one that would have all desired properties for arbitrary applications in one or even both fields , does not exist .this manuscript demonstrates how _ frame theory _ can be considered a particularly useful _ conceptual _ background for scientists in both hearing and audio processing , and presents some first motivating applications .frames provide the following general properties : _ perfect reconstruction _ , _ stability _ , _ redundancy _ , and a _ signal - independent , linear inversion procedure_. 
in particular , frame theory can be used to analyze any filter bank , thereby providing useful insight into its structure and properties . in practice , if a filter bank construction ( i.e. including both the analysis and synthesis filter banks ) satisfies the frame condition ( see sec . [sec : erbfb ] ) , it benefits from all the frame properties mentioned above . why are those properties essential to researchers in audio signal processing and hearing science ?* perfect reconstruction property : * with the possible exception of frequencies outside the audible range , a non - adaptive analysis filter bank , i.e. one that is general , not signal - dependent , has no means of determining and extracting exactly the perceptually relevant information .for such an extraction , signal - dependent information would be crucial .therefore , the only way to ensure that a linear , signal - independent analysis stage , possibly followed by a nonlinear processing stage , captures all _ perceptually relevant signal components _ is to ensure that it does _ not lose any _ information at all .this , in fact , is _ equivalent to being perfectly invertible _ , i.e. having a perfect reconstruction property .thus , this property benefits the user even when reconstruction is not intended per - se .note that in general `` being perfectly invertible '' need not necessarily imply that a concrete inversion procedure is known . in the frame case , a constructive method exists , though .* stability : * for sound processing , stability is essential in the sense that , for the analysis stage , when two signals are similar ( i.e. , their difference is small ) , the difference between their corresponding analysis coefficients should also be small . for the synthesis stage , a signal reconstructed from slightly distorted coefficients should be relatively close to the original signal , that is the one reconstructed from undistorted coefficients . from an energy point of view , signals which are similar in energy should provide analysis coefficients whose energy is also similar .so the respective energies remain roughly proportional . in particular , considering a signal mixture , the combination of stability and linearity ensures that every signal component is represented and weighted according to its original energy .in other terms , individual signal components are represented proportional to their energy , which is very important for , e.g. , visualization . even in a perceptual analysis , where inaudible components should not be visualized equally to audible components having the same energy ,this stability property is important . to illustrate this ,recall that the nonlinear post - processing stages in auditory models are signal dependent .that is , also the inaudible information can be essential to properly characterize the nonlinearity .for instance , consider again the setup of the _ compressive gammachirp _model where an intermediate representation is obtained through the application of a linear analysis filter bank to the input signal .the result of this linear transform determines the shape of the subsequent nonlinear compensation filter .note that the _ whole _intermediate representation is used .consequently , the proper estimation of the nonlinearity crucially relies on the signal representation being accurate , i.e. 
_ all _ signal components being represented and appropriately weighted .this _ accurateness _ comes for free if the analysis filter bank forms a frame .* signal - independent , linear inversion : * a consistent ( i.e. signal - independent ) inversion procedure is of great benefit in signal processing applications .it implies that a single algorithm / implementation can perform all the necessary synthesis tasks . for nonlinear representations , finding a signal - independent procedure which providesa stable reconstruction is a highly nontrivial affair , if it is at all possible . with linear representations ,such a procedure is easier to determine and this can be seen as an advantage of the linearity .the linearity provided by the reconstruction algorithm also significantly simplifies separation tasks . in a linear representation , a separation in the coefficient ( time - frequency ) domain , i.e. before synthesis , is equivalent to a separation in the signal domain .such a property is highly relevant , for instance , to computational auditory scene analysis systems that , to some extent , are sound source separators ( see sec . [sec : casa0 ] ) .* redundancy : * representations which are sampled at critical density are often unsuitable for visualization , since they lead to a low resolution , which may lead to many distinct signal components being integrated into a single coefficient of the transform .thus , the individual coefficients may contain information from a lot of different sources , which makes them hard to interpret .still , the whole set of coefficients captures all the desired signal information if ( and only if ) the transform is invertible .redundancy provides higher resolution and so components that are separated in time or in frequency can be separated in the transform domain .furthermore , redundant representations are smoother and therefore easier to read than their critically sampled counterparts . moreover , redundant representations provide some resistance against noise and errors .this is in contrast to non - redundant systems , where distortions can not be compensated for .this is used for de - noising approaches . in particular , if a signal is synthesized in a straight - forward way from noisy ( redundant ) coefficients , the synthesis process has the tendency to reduce the energy of the noise , i.e. there is some noise cancellation .+ besides the above properties , which are direct consequences of the frame inequalities , the generality of frame theory enables the consideration of _ additional important properties_. in the setting of perceptually motivated audio signal analysis and processing , these include : * perceptual relevance : * we have stressed that the only way to ensure that all perceptually relevant information is kept is to accurately capture all the information by using a stable and perfectly invertible system for analysis .however , in an auditory model or in perceptually motivated signal processing , perceptually irrelevant components should be discarded at some point .if only a linear signal processing framework is desired , this can be achieved by applying a perceptual weighting and a masking model , see sec .[ sec : psychotheory ] .if a nonlinear auditory model like the compressive gammachirp filter bank is used , recall that the nonlinear stage is mostly determined by the coefficients at the output of the linear stage .therefore , all information should be kept up to the nonlinear stage . 
in other words , discarding information already in the analysis stage might falsify the estimation of the nonlinear stage , thereby resulting in an incorrect perceptual analysis .we want to stress here the importance of being able to _ selectively _ discard unnecessary information , in contrast to information being _ involuntarily lost _ during the analysis and/or synthesis procedures . * a flexible signal processing framework : * all stable and invertible filter banks form a frame and therefore benefit from the frame properties discussed above . in addition , using filter banks that are frames allows for flexibility .for instance , one can gradually tune the signal representation such as the _ time - frequency resolution _, analysis filters _ shape _ and _ bandwidth _ , _ frequency scale _, _ sampling density _ etc . , while at the same time retaining the crucial frame properties . it can be tremendously useful to provide a single and adaptable framework that allows to switch model parameters and/or transition between them . by staying in the common general setting of filter bank frames , the linear filter bank analysis in an auditory model or signal processing schemecan be seen as an exchangeable , practically self - contained block in the scheme .thus , the filter bank parameters , e.g. those mentioned before , can be tuned by scientists according to their preference , without the need to redesign the remainder of the model / scheme .such a common background leads to results being more comparable across research projects and thus benefits not only the individual researcher , but the whole field .two main advantages of a common background are the following : first , the properties and parameters of various models can be easily interpreted and compared across contributions ; second , by the adaption of a linear model to obtain a nonlinear model the new model parameters remain interpretable .* ease of integration : * filter banks are already a common tool in both hearing science and signal processing . integrating a filter bank frame into an existing analysis / processing frameworkwill often only require minor modifications of existing approaches .thus , frames provide a theoretically sound foundation without the need to fundamentally re - design the remainder of your analysis ( or processing ) framework .+ _ in some cases , you might already implicitly use frames without knowing it . in that case , we provide here the conceptual background necessary to unlock the full potential of your method . _ + the rest of this chapter is organized as follows : in section [ sec : psychotheory ] , we provide basic information about the human auditory system and introduce some psychoacoustic concepts . in section [ sec : frameth ] we present the basics of frame theory providing the main definitions and a few crucial mathematical statements . in section [ sec : erbfb ]we provide some details on filter bank frames .the chapter concludes with section [ sec : appli ] where some examples are given for the application of frame theory to signal processing in psychoacoustics .this section provides a brief introduction to the human auditory system .important concepts that are relevant to the problems treated in this chapter are then introduced , namely auditory filtering and auditory masking . for a more complete description of the hearing organ ,the interested reader is referred to e.g. 
.the human ear is a very sensitive and complex organ whose function is to transform pressure variations in the air into the percept of sound .to do so , sound waves must be converted into a form interpretable by the brain , specifically into neural action potentials .[ fig : earanatomy ] shows a simplified view of the ear s anatomy .incoming sound waves are guided by the pinna into the ear canal and cause the eardrum to vibrate .eardrum vibrations are then transmitted to the cochlea by three tiny bones that constitute the ossicular chain : the malleus , incus , and stapes .the ossicular chain acts as an impedance matcher .its function is to ensure efficient transmission of pressure variations in the air into pressure variations in the fluids present in the cochlea .the cochlea is the most important part of the auditory system because it is where pressure variations are converted into neural action potentials .the cochlea is a rolled - up tube filled with fluids and divided along its length by two membranes , the reissner s membrane and basilar membrane ( bm ) . a schematic view of the unrolled cochlea is shown in fig .[ fig : earanatomy ] ( the reissner s membrane is not represented ) .it is the response of the bm to pressure variations transmitted through the ossicular chain that is of primary importance .because the mechanical properties of the bm vary across its lengths ( precisely , there is a gradation of stiffness from base to apex ) , bm stimulation results in a complex movement of the membrane . in case of a sinusoidal stimulation ,this movement is described as a traveling wave .the position of the peak in the pattern of vibration depends on the frequency of the stimulation .high - frequency sounds produce maximum displacement of the bm near the base with little movement on the rest of the membrane .low - frequency sounds rather produce a pattern of vibration which extends all the way along the bm but reaches a maximum before the apex .the frequency that gives the maximum response at a particular point on the bm is called the `` characteristic frequency '' ( cf ) of that point . in case of a broadband stimulation ( e.g. an impulsive sound like a click ), all points on the bm will oscillate .in short , the bm separates out the spectral components of a sound similar to a fourier analyzer .the last step of peripheral processing is the conversion of bm vibrations into neural action potentials .this is achieved by the inner hair cells that sit on top of the bm .there are about 3500 inner hair cells along the length of the cochlea ( mm in humans ) .the tip of each cell is covered with sensor hairs called stereocilia .the base of each cell directly connects to auditory nerve fibers .when the bm vibrates , the stereocilia are set in motion , which results in a bio - electrical process in the inner hair cells and , finally , in the initiation of action potentials in auditory nerve fibers .those action potentials are then coded in the auditory nerve and conveyed to the central system where they are further processed to end up in a sound percept . 
because the response of auditory nerve fibers is also frequency specific and the action potentials vary over time , the `` internal representation '' of a sound signal in the auditory nerve can be likened to a time - frequency representation .because of the frequency - to - place transformation ( also called tonotopic organization ) in the cochlea , and the transmission of time - dependent neural signals , the bm can be modeled in a first linear approximation as a bank of overlapping bandpass filters , named `` critical bands '' or `` auditory filters '' .the center frequencies and bandwidth of the auditory filters , respectively , approximate the cf and width of excitation on the bm .noteworthy , the width of excitation depends on level as well : patterns become wider and asymmetric as sound level increases ( e.g. ) .several auditory filter models have been proposed based on the results from psychoacoustics experiments on masking ( see e.g. and sec .[ ssec : masking ] ) .a popular auditory filter model is the gammatone filter ( see fig [ fig : gammatones ] ) .although gammatone filters do not capture the level dependency of the actual auditory filters , their ease of implementation made them popular in audio signal processing ( e.g. ) .more realistic auditory filter models are , for instance , the roex and gammachirp filters .other level - dependent and more complex auditory filter banks include for example the dual resonance non - linear filter bank or the dynamic compressive gammachirp filter bank .the two approaches in feature a linear filter bank followed by a signal - dependent nonlinear stage . as mentioned in the introduction ,this is a particular way of describing a nonlinear system by modifying a linear system .finally , it is worth noting that besides psychoacoustic - driven auditory models , mathematically founded models of the auditory periphery have been proposed .those include , for instance , the wavelet auditory model or the `` earwig '' time - frequency distribution .the bandwidth of the auditory filters has been determined based on psychoacoustic experiments .the estimation of bandwidth based on loudness perception experiments gave rise to the concept of bark bandwidth defined by where denotes the frequency and denotes the bandwidth , both in hz .another popular concept is the equivalent rectangular bandwidth ( erb ) , that is the bandwidth of a rectangular filter having the same peak output and energy as the auditory filter .the estimations of erbs are based on masking experiments .the erb is given by and are commonly used in psychoacoustics and signal processing to approximate the auditory spectral resolution at low to moderate sound pressure levels ( i.e. 3070 db ) where the auditory filters shape remains symmetric and constant .see for example for the variation of with level . based on the concepts of bark and erb bandwidths , corresponding frequency scales have been proposed to represent and analyze data on a scale related to perception .to describe the different mappings between the linear frequency domain and the nonlinear perceptual domain we introduce the function where is an auditory unit that depends on the scale .the bark scale is and the erb scale is both auditory scales are connected to the ear s anatomy .one aud unit indeed corresponds to a constant distance along the bm . 
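for reference, the following sketch collects widely used closed-form approximations of these quantities, namely the glasberg-moore erb formulas, the zwicker-terhardt bark scale, and the impulse response of the gammatone filter discussed in the next paragraph. the constants are the ones commonly quoted in the literature and are assumptions here; they may differ slightly from the exact expressions intended above.

```python
import numpy as np

def erb_bandwidth(f):
    """equivalent rectangular bandwidth in hz (glasberg & moore, 1990)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def erb_scale(f):
    """number of erbs below frequency f in hz (erb-rate scale)."""
    return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)

def bark_scale(f):
    """bark scale (zwicker & terhardt approximation)."""
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def gammatone_ir(fc, fs, duration=0.05, order=4, b=1.019):
    """unit-energy impulse response t^(n-1) exp(-2*pi*b*erb(fc)*t) cos(2*pi*fc*t)
    of a gammatone filter centred at fc hz."""
    t = np.arange(0.0, duration, 1.0 / fs)
    g = (t ** (order - 1) * np.exp(-2 * np.pi * b * erb_bandwidth(fc) * t)
         * np.cos(2 * np.pi * fc * t))
    return g / np.sqrt(np.sum(g ** 2))

for fc in (250.0, 1000.0, 4000.0):
    print(f"{fc:6.0f} hz: erb = {erb_bandwidth(fc):6.1f} hz, "
          f"{erb_scale(fc):5.2f} erb units, {bark_scale(fc):5.2f} bark")

# a small bank with centre frequencies equidistant on the erb scale
fs = 16000
units = np.linspace(erb_scale(100.0), erb_scale(8000.0), 24)
centres_hz = (10 ** (units / 21.4) - 1.0) * 1000.0 / 4.37   # invert erb_scale
bank = [gammatone_ir(fc, fs) for fc in centres_hz]
```

placing the centre frequencies equidistantly on the erb (or bark) scale, as in the last lines, mirrors the construction used for the perceptually motivated filter banks of sec. [ sec : erbfb ].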
1 corresponds to 1.3 mm while 1 corresponds to 0.9 mm .the phenomenon of masking is highly related to the spectro - temporal resolution of the ear and has been the focus of many psychoacoustics studies over the last 70 years .auditory masking refers to the increase in the detection threshold of a sound signal ( referred to as the `` target '' ) due to the presence of another sound ( the `` masker '' ) .masking is quantified by measuring the detection thresholds of the target in presence and absence of the masker ; the difference in thresholds ( in db ) thus corresponds to the _ amount of masking_. in the literature , masking has been extensively investigated in the spectral or temporal domain .the results were used to develop models of spectral or temporal masking that are currently implemented in audio applications like perceptual coding ( e.g. ) or sound processing ( e.g. . only a few studies investigated masking in the joint time - frequency domain .we present below some typical psychoacoustic results on spectral , temporal , and spectro - temporal masking . for more results and discussion on the origins of masking the interested readeris referred to e.g. . in the following ,we denote by , , and the frequency , duration , and level , respectively , of masker or target .those signal parameters are fixed by the experimenter , i.e. they are known .the frequency shift between masker and target is and the time shift is defined as the onset delay between masker and target .finally , denotes the amount of masking in db . to study spectral masking ,masker and target are presented simultaneously ( since usually , this is equivalent to saying that 0 ) and is varied .there are two ways to vary , either fix and vary or vice versa .similarly , one can fix and vary or vice versa . in short , various types of masking curvescan be obtained depending on the signal parameters .a common spectral masking curve is a masking pattern that represents or as a function of or ( see fig .[ fig : mpfreq ] ) . to measure masking patterns , and fixed and is measured for various . under the assumption that corresponds to a certain ratio of masker - to - target energy at the output of the auditory filter centered at , masking patternsmeasure the responses of the auditory filters centered at the individual .thus , masking patterns can be used as indicator of the _ spectral spread of masking _ of the masker or , in other terms , the spread of excitation of the masker on the bm .this spectral spread can in turn be used to derive a masking threshold , as used for example in audio codecs .see also sec .[ sec : irrel0 ] .[ fig : mpfreq ] shows typical masking patterns measured for narrow - band noise maskers of different levels ( = 45 , 65 , and 85 db spl , as indicated by the different lines ) and frequencies ( = 0.25 , 1 , and 4 khz , as indicated by the different vertical dashed lines ) . in this study , = 200 ms .the masker was a 80-hz - wide band of gaussian noise centered at .the target was also a 80-hz band of noise centered at .the main properties to be observed here are : 1 . for a given masker ( i.e. a pair of and ) , is maximum for = 0 and decreases as increases .this reflects the decay of masker excitation on the bm .2 . 
masking patternsbroaden with increasing level .this reflects the broadening of auditory filters with increasing level .masking patterns are broader at low than at high frequencies ( see - ) .this reflects the fact that the density of auditory filters is higher at low than at high frequencies .consequently , a masker with a given bandwidth will excite more auditory filters at low frequencies .( in db spl ) is plotted as a function of ( in hz ) on a logarithmic scale .the gray dotted curve indicates the threshold in quiet .the difference between any of the colored curves and the gray curve thus corresponds to , as indicated by the arrow .source : mean data for listeners ja and ao in ( * ? ? ?* experiment 3 , figs .5 - 6 ) . ] by analogy with spectral masking , temporal masking is measured by setting = 0 and varying . _backward _ masking is observed for 0 , that is when the target precedes the masker in time . _ forward _ masking is observed for , that is when the target follows the masker .backward masking is hardly observed for -20 ms and is mainly thought to result from attentional effects .in contrast , forward masking can be observed for + 200 ms .therefore , in the following we focus on forward masking .typical forward masking curves are represented in fig .[ fig : mptime ] .the left panel shows the effect of for = 4 khz ( mean data from ) . in this study ,masker and target were sinusoids ( = 300 ms , = 20 ms ) .the main features to be observed here are ( i ) the temporal decay of forward masking is a linear function of and ( ii ) the rate of this decay strongly depends on . the right panel shows the effect of for = 2 khz and = 60 db spl ( mean data from ) . in this study , the masker was a pulse of uniformly masking noise ( i.e. a broad - band noise producing the same at all frequencies in the range 020 khz , see ) .the target was a sinusoid with = 5 ms .it can be seen that the ( i.e. the difference between the connected symbols and the star ) at a given increases with increasing , at least for 100 ms . finally , a comparison of the two panels in fig .[ fig : mptime ] for = 60 db indicates that , for 50 ms , the 300-ms sinusoidal masker ( empty diamonds left ) produces more masking than the 200-ms broad - band noise masker ( empty squares right ) . despite the difference in , increasing the duration of the noise masker to 300 ms is not expected to account for the difference in of up to 20 db observed here .( in db spl ) is plotted as a function of the temporal gap between masker offset and target onset , i.e. ( in ms ) on a logarithmic scale .left panel : masking curves for various and = 300 ms ( adapted from ) .right panel : masking curves for various and = 60 db ( adapted from ) .stars indicate the target thresholds in quiet . ] only a few studies measured spectro - temporal masking patterns , that is and both systematically varied ( e.g. ) . those studies mostly involved long ( 100 ms ) sinusoidal maskers . in other words ,those studies provide data on the time - frequency spread of masking for long and narrow - band maskers . in the context of time - frequency decompositions , a set of elementary functions , or `` atoms '' , with good localization in the time - frequency domain ( i.e. 
short and narrow - band )is usually chosen , see sec .[ sec : frameth ] .to best predict masking in the time - frequency decompositions of sounds , it seems intuitive to have data on the time - frequency spread of masking for such elementary atoms , as this will provide a good match between the masking model and the sound decomposition .this has been investigated in .precisely , spectral , forward , and time - frequency masking have been measured using gabor atoms of the form with as masker and target . according to the definition of gabor atoms in ,the masker was defined by , where denotes the imaginary part , with a gaussian window and = 4 khz .the masker level was fixed at = 80 db .the target was defined by with .the set of time - frequency conditions measured in is illustrated in fig .[ fig : tfconds ] .because in this particular case we have , the target term reduces to .the mean masking data are summarized in fig .[ fig : mptf ] . these data , together with those collected by laback et al on the additivity of spectral and temporal masking for the same gabor atoms , constitute a crucial basis for the development of an accurate time - frequency masking model to be used in audio applications like audio coding or audio processing ( see sec . [sec : appli ] ) .the term auditory scene analysis ( asa ) , introduced by bregman , refers to the perceptual organization of auditory events into auditory streams .it is assumed that this perceptual organization constitutes the basis for the remarkable ability of the auditory system to separate sound sources , especially in noisy environments .a demonstration of this ability is the so - called `` cocktail party effect '' , i.e. when one is able to concentrate on and follow a single speaker in a highly competing background ( e.g. many concurring speakers combined with cutlery and glass sounds ) .the term computational auditory scene analysis ( casa ) thus refers to the study of asa by computational means .the casa problem is closely related to the problem of source separation .generally speaking , casa systems can be considered as perceptually motivated sound source separators .the basic work flow of a casa system is to first compute an auditory - based time - frequency transform ( most systems use a gammatone filter bank , but any auditory representation that allows reconstruction can be used , see sec .[ sec : audlet0 ] ) .second , some acoustic features like periodicity , pitch , amplitude and frequency modulations are extracted so as to build the perceptive organization ( i.e. constitute the streams ) .then , stream separation is achieved using so - called `` time - frequency masks '' .these masks are directly applied to the perceptual representation ; they retain the `` target '' regions ( mask = 1 ) and suppress the background ( mask = 0 ) .those masks can be binary or real , see e.g. 
.the target regions are then re - synthesized by applying the inverse transform to obtain the signal of interest .noteworthy , a perfect reconstruction transform is of importance here .furthermore , the linearity and stability of the transform allow a separation of the audio streams directly in the transform domain .most gammatone filter banks implemented in casa systems are only approximately invertible , though .this is due to the fact that such systems implement gammatone filters in the analysis stage and their time - reversed impulse responses in the synthesis stage .this setting implies that the frequency response of the gammatone filter bank has an all - pass characteristic and features no ripple ( equivalently in the frame context , that the system is tight , see [ ssec : frametheory ] ) . in practice , however , gammatone filter banks usually consider only a limited range of frequencies ( typically in the interval 0.14 khz for speech processing ) and the frequency response features ripples if the filters density is not high enough .if a high density of filters is used , the audio quality of the reconstruction is rather good .still , the quality could be perfect by using frame theory .for instance , one could render the gammatone system tight ( see proposition [ prop : cantight ] ) or use its dual frame ( see sec .[ sec : perfrecfram0 ] ) .the use of binary masks in casa is directly motivated by the phenomenon of auditory masking explained above .however , time - frequency masking is hardly considered in casa systems . as a final remark, an analogy can be established between the ( binary ) masks used in casa and the concept of frame multipliers defined in sec .[ fmult ] .specifically , the masks used in casa systems correspond to the symbol in .this analogy is not considered in most casa studies , though , and offers the possibility for some future research connecting acoustics and frame multipliers .what is an appropriate setting for the mathematical background of audio signal processing ?since real - world signals are usually considered to have finite energy and technically are represented as functions of some variable ( e.g. time ) , it is natural to think about them as elements of the space . roughly speaking, contains all functions with finite energy , i.e. with . for working with sampled signals ,the analogue appropriate space is ( denoting a countable index set ) which consists of the sequences with finite energy , i.e. . both spaces and are hilbert spaces and one may use the rich theory ensured by the availability of an inner product , that serves as a measure of correlation , and is used to define orthogonality , of elements in the hilbert space . in particular , the inner product enables the representation of all functions in in terms of their inner products with a set of reference functions : a standard approach for such representations uses orthonormal bases ( onbs ) , see e.g. .every separable hilbert space has an onb and every element can be written as with uniqueness of the coefficients , .the convenience of this approach is that there is a clear ( and efficient ) way for calculating the coefficients in the representations using the same orthonormal sequence .even more , the energy in the coefficient domain ( i.e. 
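as a toy illustration of this work flow, and not of any published casa system, the following sketch uses a redundant stft in place of the auditory (e.g. gammatone) analysis, an oracle binary mask in place of a mask estimated from acoustic features, and the inverse stft for re-synthesis; the test signal, the sampling rate, and the 100 hz mask width are arbitrary choices.

```python
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
target = np.sin(2 * np.pi * 440 * t)           # "target" stream
mix = target + np.sin(2 * np.pi * 2000 * t)    # mixture with a competing stream

# analysis: a redundant stft stands in for the auditory filter bank
f, frames, Z = signal.stft(mix, fs=fs, nperseg=512, noverlap=384)

# oracle binary mask: keep time-frequency bins around the target frequency;
# real casa systems estimate such masks from pitch, onsets, modulations, etc.
mask = (np.abs(f[:, None] - 440.0) < 100.0).astype(float)

# synthesis from the masked coefficients separates the stream in the signal domain
_, separated = signal.istft(Z * mask, fs=fs, nperseg=512, noverlap=384)
separated = separated[:len(mix)]

err = np.sum((separated - target) ** 2) / np.sum(target ** 2)
print(f"relative energy of the separation error: {err:.2e}")
```

because the transform is linear and stable, masking the coefficients before synthesis is equivalent to a separation in the signal domain, and a mask with values in [0, 1] is precisely the analogy with frame multipliers mentioned above.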
, _ the square of the -norm _ ) is exactly the energy of the element : furthermore , the representation ( [ repronb ] ) is stable - if the coefficients are slightly changed to , one obtains an element close to the original one .however , the use of onbs has several disadvantages .often the construction of orthonormal bases with some given side constraints is difficult or even impossible ( see below ) .small perturbation of the orthonormal basis elements may destroy the orthonormal structure .finally , the uniqueness of the coefficients in ( [ repronb ] ) leads to a lack of exact reconstruction when some of these coefficients are lost or disturbed during transmission . +this naturally leads to the question how the concept of onbs could be generalized to overcome those disadvantages . as an extension of the above - mentioned parseval equality for onbs, one could consider inequalities instead of an equality , i.e. boundedness from above and below ( see def .[ framedef ] ) .this leads to the concept of _ frames _ , which was introduced by duffin and schaeffer in 1952 .it took several decades for scientists to realize the importance and applicability of frames .popularized around the 90s in the wake of wavelet theory , frames have seen increasing interest and extensive investigation by many researchers ever since . frame theory is both a beautiful abstract mathematical theory and a concept applicable in many other disciplines like e.g. engineering , medicine , and psychoacoustics , see sec . [sec : appli ] . via frames, one can avoid the restrictions of onbs while keeping their important properties . frames still allow perfect and stable reconstruction of all the elements of the space , though the representation - formulas in general are not as simple as the ones via an onb ( see sec .[ sec : perfrecfram0 ] ) . compared to orthonormal bases , the frame property itself is much more stable under perturbations ( see , e.g. , ( * ? ? ?15 ) ) . also , in contrast to orthonormal bases , frames allow redundancy which is desirable e.g. in signal transmission , for reconstructing signals when some coefficients are lost , and for noise reduction . via redundant framesone has multiple representations and this allows to choose appropriate coefficients fulfilling particular constraints , e.g. when aiming at sparse representations .furthermore , frames can be easier and faster to construct than onbs .some advantageous side constraints can _ only _ be fulfilled for frames .for example , gabor frames provide convenient and efficient signal processing tools , but good localization in both time and frequency can never be achieved if the gabor frame is an onb or even a riesz basis ( cf .balian - low theorem , see e.g. ( * ? ? ? * theor .4.1.1 ) ) , while redundant gabor frames for this purpose are easily constructed ( for example using the gaussian function ) .see sec .[ sec : tfmask0 ] on how good localization in time and frequency is important in masking experiments .some of the main properties of frames were already obtained in the first paper . for extensive presentation on frame theory, we refer to .+ in this section we collect the basics of frame theory relevant to the topic of the current paper .all the statements presented here are well known .proofs are given just to make the paper self - contained , for convenience of the readers , and to facilitate a better understanding of the mathematical concepts .they are mostly based on . 
throughout the rest of the section, denotes a separable hilbert space with inner product , - the identity operator on , - a countable index set , and ( resp . ) - a sequence ( resp . ) with elements from .the term _ operator _ is used for a linear mapping .readers not familiar with hilbert space theory can simply assume for the remainder of this section .the frame concept extends naturally the parseval equality permitting inequalities , i.e. , the ratio of the energy in the coefficient domain to the energy of the signal may be bounded from above and below instead of being necessarily one : [ framedef ] a countable sequence is called a _ frame _ for the hilbert space if there exist positive constants and such that the constant ( resp . ) is called a _ lower _ ( resp ._ upper _ ) _ frame bound _ of .a frame is called _ tight _ _ with frame bound _ if is both a lower and an upper frame bound . a tight frame with bound called a _parseval frame_. clearly , every onb is a frame , but not vice - versa .frames can naturally be split into two classes - the frames which still fulfill a basis - property , and the ones that do not : [ defrb ] a frame for which is a schauder basis is called a _ schauder basis _ for if every element can be written as with unique coefficients . ] for is called a _ riesz basis _ for . a frame for which is not a schauder basis for called _ redundant _ ( also called _ overcomplete _ ) .note that riesz bases were introduced by bari in different but equivalent ways .riesz bases also extend onbs , but contrary to frames , riesz bases still have the disadvantages resulting from the basis - property , as they do not allow redundancy . for more on riesz bases ,see e.g. . as an illustration of the concepts of onbs , riesz bases , and redundant frames in a simple setting , consider examples in the euclidean plane , see fig .[ onbrbfr ] .\(a ) onb for ( b ) unique representation of via + : onb ( a , b ) , riesz basis ( c , d ) , frame ( e , f),title="fig:",scaledwidth=89.0% ] + ( c ) riesz basis for ( d ) unique representation of via + : onb ( a , b ) , riesz basis ( c , d ) , frame ( e , f),title="fig:",scaledwidth=89.0% ] + ( e ) frame for ( f ) non - unique representation of via + : onb ( a , b ) , riesz basis ( c , d ) , frame ( e , f),title="fig:",scaledwidth=100.0% ] note that in a finite dimensional hilbert space , considering only finite sequences , frames are precisely the complete sequences ( see , e.g. , ( * ? ? ? * sec . 1.1 ) ) , i.e. , the sequences which span the whole space . however , this is not the case in infinite - dimensional hilbert spaces - every frame is complete , but completeness is not sufficient to establish the frame property . for results focused on frames in finite dimensional spaces ,refer to .as non - trivial examples , let us mention a specific type of frames used often in signal processing applications , namely gabor frames .a gabor system is comprised of atoms of the form with function called the _ ( generating ) _ _ window _ and with time- and frequency - shift , respectively . to allow perfect and stable reconstruction , the gabor system is assumed to have the frame - property andin this case is called a _gabor frame_. note that the analysis operator of a gabor frame corresponds to a _ sampled short - time - fourier transform _( see , e.g. , ) also referred to as _gabor transform_. most commonly , _regular gabor frames _ are used ; these are frames of the form for some positive and satisfying necessarily ( but in general not sufficiently ) . 
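to connect def. [ framedef ] with something computable, the following finite-dimensional sketch determines optimal frame bounds as the extreme eigenvalues of the frame operator and illustrates, with a redundant parseval frame, the noise attenuation upon synthesis mentioned in the introduction. the two frames used, three unit vectors at 120 degrees (a tight frame for the euclidean plane, in the spirit of fig. [ onbrbfr ]) and columns of a unitary dft matrix, are standard examples and not constructions specific to this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_bounds(F):
    """optimal frame bounds a, b of a finite frame (rows of F are the frame
    vectors), obtained as extreme eigenvalues of the frame operator."""
    eig = np.linalg.eigvalsh(F.conj().T @ F)
    return eig.min(), eig.max()

# three unit vectors at 120 degrees: a redundant tight frame for R^2 (a = b = 3/2)
ang = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
mercedes = np.column_stack([np.cos(ang), np.sin(ang)])
print("tight frame bounds   :", frame_bounds(mercedes))

# a redundant parseval frame for C^N: N columns of a unitary M-point dft matrix
N, M = 64, 256
T = (np.fft.fft(np.eye(M)) / np.sqrt(M))[:, :N]   # analysis operator, a = b = 1
print("parseval frame bounds:", frame_bounds(T))

# synthesis from noisy redundant coefficients attenuates the noise energy,
# on average by the redundancy M/N (here a factor of 4)
x = rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
x_rec = T.conj().T @ (T @ x + noise)
print("noise energy in coefficients :", np.sum(np.abs(noise) ** 2))
print("noise energy after synthesis :", np.sum(np.abs(x_rec - x) ** 2))
```

for the tight frame both bounds equal 3/2 and for the parseval frame they equal 1; reconstructing from coefficients corrupted by white noise reduces the noise energy roughly by the redundancy factor, which is the robustness property claimed earlier.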
to mention a concrete example - for the gaussian , the respective regular gabor system is a frame for if and only if ( see , e.g. , ( * ? ? ?7.5 ) and references therein ) .other possibilities include using alternative sampling structures , on subgroups or irregular sets . if the window is allowed to change with time ( or frequency ) one obtains the non - stationary gabor transform .there it becomes apparent that frames allow to create adaptive and adapted transforms , while still guaranteeing perfect reconstruction .if not continuous but sampled signals are considered , gabor theory works similarly ._ discrete gabor frames _ can be defined in an analogue way , namely , frames of the form \right)_{l\in\zz , k=0,1,\ldots , m-1} ] the ( discrete ) dirac symbol , with =1 ] of the system represented in fig .[ sfig : nonuniformfb ] are given in the time domain by \, = \ , \downarrow_{d_{k}}\left\ { h_{k}*x\right\}[n]\ ] ] the output signal is \ , = \ , \sum_{k=0}^{k}\left(g_{k}*\uparrow_{d_{k}}\left\{y_{k}\right\ } \right)[n] ] is the _ alias component matrix _ and ] for all and some fixed . in the casewhen , the fb output is _ delayed _ by .using the alias domain representation of the fb , the _ perfect reconstruction condition _ can be expressed as ^{t},\ ] ] for some , as this condition is equivalent to . from this vantage pointthe perfect reconstruction condition can be interpreted as all the alias components ( i.e. from the to -th ) in being uniformly canceled over all by the synthesis filters , while the first component of remains constant over all ( up to a fixed power of ) .the perfect reconstruction condition is of tremendous importance for determining whether an fb , including both analysis and synthesis steps , provides perfect reconstruction .however , given a fixed analysis fb , the alias domain representation may fail to provide straightforward or efficient ways to find suitable synthesis filters that provide perfect reconstruction .it can sometimes be used to determine whether such a system can exist , although the process is far from intuitive .consequently , non - uniform perfect reconstruction fbs are still not completely investigated , and thus frame theory may provide valuable new insights .however , for uniform fbs the perfect reconstruction conditions have been largely treated in the literature .therefore , before we indulge in the frame theory of fbs , we also show how a non - uniform fb can be decomposed into its equivalent uniform fb .such a uniform equivalent of the fb always exists and can be obtained as shown in fig .[ sfig : equniformfb ] and described below . .the terms and in ( b ) correspond to the -transforms of the terms and defined in .,scaledwidth=89.0% ] to construct the equivalent uniform fb to a general fb specified by analysis filters , synthesis filters , and downsampling and upsampling factors , , start by denoting again .we first construct the desired uniform fb , before showing that it is in fact equivalent to the given non - uniform fb . forevery filter in the non - uniform fb , introduce filters , given by specific delayed versions of : = h_k\ast \delta_{ld_k } = h_k \left [ n - l d_k\right ] \quad \text{and } \quad g_k^{(l)}[n ] = g_k\ast \delta_{-ld_k } = g_k \left [ n + l d_k\right],\ ] ] for .it is easily seen that convolution with equals translation by samples by just checking the definition of the convolution operation . 
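before completing the construction of the equivalent uniform fb, the analysis stage defined above, together with a simple synthesis rule for strictly band-limited filters (the `painless' situation made precise later in this section), can be illustrated by a short self-contained sketch. it works on periodic signals in the dft domain; the hann-shaped filters, the number of channels, and the non-uniform downsampling factors are illustrative choices, and this is not the ltfat implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 1024, 8                         # signal length and number of channels
spacing = L // K

# band-limited analysis filters: hann bumps in the dft domain (painless design)
H = np.zeros((K, L))
for k in range(K):
    idx = (np.arange(2 * spacing) - spacing + k * spacing) % L
    H[k, idx] = np.hanning(2 * spacing)

# non-uniform downsampling factors; the painless condition requires that each
# filter's frequency support fits into L/d_k consecutive bins
d = np.array([2, 4, 4, 4, 4, 4, 4, 2])
assert all(L // dk >= 2 * spacing for dk in d)

x = rng.standard_normal(L)
X = np.fft.fft(x)

# analysis: y_k = downsample_{d_k}(h_k * x), computed via the dft
subbands = [np.fft.ifft(Hk * X)[::dk] for Hk, dk in zip(H, d)]

# painless synthesis: dual filters are H_k divided by the diagonal response
diag = sum(np.abs(Hk) ** 2 / dk for Hk, dk in zip(H, d))
x_rec = np.zeros(L, dtype=complex)
for Hk, dk, yk in zip(H, d, subbands):
    up = np.zeros(L, dtype=complex)
    up[::dk] = yk                                 # upsample by zero insertion
    x_rec += np.fft.ifft((Hk / diag) * np.fft.fft(up))

# imaginary part is at numerical-precision level
print("reconstruction error:", np.max(np.abs(x_rec.real - x)))
```

the printed error is at the level of numerical precision; with filters that are not band-limited, or with larger downsampling factors, dividing by the diagonal term no longer suffices and one of the more general strategies discussed below is needed.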
consequently , the sub - band components are = y_{k}[n q_{k }- l ] = \downarrow_{d}\{\underbrace{h_{k}*\delta_{ld_{k}}}_{:= h_{k}^{(l ) } } * x\ } [ n],\ ] ] where is the -th sub - band component with respect to the non - uniform fb .thus , by grouping the corresponding sub - bands , we obtain = \sum_{l=0}^{q_{k}-1}\uparrow_{q_{k}}\left\{y_{k}^{(l)}\right\ } [ n+l].\ ] ] in the frequency domain , the filters are given by similar to before , the output of the fb can be written as to obtain the second equality , we have used that .insert eq . into to obtain {\mathbfit}h_k(z)g_{k}(z)\nonumber\\ & = & d^{-1}[x(w^0_{d}z),\ldots , x(w^{d-1}_d z)]{\mathbfit}h(z ) { \mathbfit}g(z ) , \end{aligned}\ ] ] which is exactly the output of the non - uniform fb specified by the s , s and s , see .therefore , we see that an equivalent uniform fb for every non - uniform fb is obtained by decomposing each -th channel of the non - uniform system into channels .the uniform system then features channels in total with the downsampling factor in all channels. we will now describe in detail the connection between non - uniform fbs and frame theory .the main difference to previous work in this direction , cf . , is that we do not restrict to the case of uniform fbs .the results in this section are not new , but this presentation is their first appearance in the context of non - uniform fbs . besides using the equivalent uniform fb representation , see fig .[ sfig : equniformfb ] , we transfer results previously obtained for _ generalized shift - invariant systems _ and nonstationary gabor systems to the non - uniform fb setting .for that purpose , we consider frames over the hilbert space of finite energy sequences .moreover , we consider only fbs with a finite number of channels , a setup naturally satisfied in every real- world application .the central observation linking fbs to frames is that the convolution can be expressed as an inner product : = \downarrow_{d_{k}}\left\ { h_{k}*x\right\}[n ] = \langle x , \overline{h_k[nd_k -\cdot ] } \rangle\ ] ] where the bar denotes the complex conjugate .hence , the sub - band components with respect to the filters and downsampling factors equal the frame coefficients of the system }\right)_{k , n} ] and the operator denotes modulation , i.e. .if satisfies at least the upper frame inequality in eq ., then the frame operators and are related by the matrix fourier transform : where denotes the discrete - time fourier transform .since the matrix fourier transform is a unitary operation , the study of the frame properties of reduces to the study of the operator . in the context of fbs ,the frame operator can be expressed as the action of an fb with analysis filters s , downsampling and upsampling factors s , and synthesis filters } ] .inserting this into the alias domain representation of the fb yields { \mathbfit}h(z ) \left [ \begin{array}{c } \overline{h_0(1/\overline{z})}\\ \vdots \\\overline{h_k(1/\overline{z } ) } \end{array}\right]\ ] ] or , restricted to the fourier domain \mathcal h(\xi),\ ] ] with ^t : = \frac{1}{d}{\mathbfit}h(e^{2\pi i\xi})\left[\overline{h_0(e^{2\pi i\xi})},\ldots,\overline{h_k(e^{2\pi i\xi})}\right]^t,\ ] ] for . here , we used for all .we call the _ frequency response _ and , the _ alias components _ of the fb . another way to derive eq. 
is by using the walnut representation of the frame operator for the nonstationary gabor frame, first introduced in for the continuous case setting. [ pro : walnut ] let , with being (essentially) bounded and . then the frame operator admits the walnut representation for almost every and all . by the definition of the frame operator , we have . note that ; the result follows by applying poisson's summation formula, see e.g. . the sums in can be reordered to obtain , where . inserting and comparing with the definition of in , we can see that for almost every and all . hence, we recover the representation of the frame operator as per , as expected. what makes proposition [ pro : walnut ] so interesting is that it facilitates the derivation of some important sufficient frame conditions. the first is a generalization of the theory of painless non-orthogonal expansions by daubechies et al. , see also for a direct proof. let , with and . assume that for all , there is with and for almost every . then is a frame if and only if there are such that . moreover, a dual frame for is given by , where . first, note that the existence of the upper bound is equivalent to , for all . it is easy to see that, under the assumptions given, eq. equals . hence, is invertible if and only if is bounded above and below, proving the first part. moreover, is given by pointwise multiplication with and therefore the elements of the canonical dual frame for , defined in eq. , are given by . in other words, recalling , if the filters are strictly band-limited, the downsampling factors are small, and almost everywhere, then we obtain a perfect reconstruction system with synthesis filters defined by . the second, more general and more interesting condition can be likened to a diagonal dominance result, i.e. if the main term is _stronger_ than the sum of the magnitudes of the alias components , then the fb analysis provided by the filters and downsampling factors is invertible. [ pro : diagdom ] let , with and . if there are with for almost every , then forms a frame with frame bounds . note that implies , for all . therefore, proposition [ pro : walnut ] applies to any fb that satisfies . the proof of proposition [ pro : diagdom ] is somewhat lengthy and we omit it here. it is very similar to the proof of the analogous conditions for gabor and wavelet frames that can be found in for the continuous case. it can also be seen as a corollary of ( * ? ? ? * theorem 3.4 ), covering a more general setting. a few things should be noted regarding proposition [ pro : diagdom ]. (a) as mentioned before, this is a sort of diagonal dominance result. while the sum corresponds to , we have since . in fact, the finite number of channels guarantees the existence of if and only if , for all . the result implies that the fb analysis provided by the s and s is invertible whenever . (b) no explicit dual frame is provided by proposition [ pro : diagdom ]. so, while we can determine invertibility quite easily, provided the fourier transforms of the filters can be computed, the actual inversion process is still up in the air. in fact, it is unclear whether there are synthesis filters such that the s and s form a perfect reconstruction system with down-/upsampling factors . we consider here two possible means of recovering the original signal from the sub-band components. first, the equivalent uniform fb, comprised of the filters , for and all , with downsampling factor , can be constructed.
since the non - uniform fb forms a frame ,so does its uniform equivalent and hence the existence of a dual fb , for and all , is guaranteed .note that the are not necessarily delayed versions of , as it is the case for .then , the structure of the alias domain representation in with } ] , respectively .if furthermore , , then convergence speed can be further increased by preconditioning , considering instead the operator defined by more specifically , the cg algorithm is employed to solve the system for , given the coefficients . recall the analysis / synthesis operators ( see sec .[ froperators ] ) , associated to a frame , which are equivalent to the analysis / synthesis stages of the fb .the preconditioned case can be implemented most efficiently , by precomputing an approximate dual fb , defined by and solving instead }\}_{k , n},\ ] ] for , given the coefficients .algorithm [ alg : nsgsyniter ] shows a pseudo - code implementation of such a preconditioned cg scheme , available in the ltfat toolbox as the routine ` ifilterbankiter ` .initialize , concept of auditory filters lends itself nicely to the implementation as a fb . as motivated in sec .[ sec : intro0 ] , it can be expected that many audio signal processing applications greatly benefit from an invertible fb representation adapted to the auditory time - frequency resolution . despite the auditory system showing significant nonlinear behavior ,the results obtained through a linear representation are desirable for being much more predictable than when accounting for nonlinear effects .we call such a system _ perceptually - motivated fb _ , to distinguish from _ auditory fbs _ that attempt to mimic the nonlinearities in the auditory system .note that , as mentioned in section [ ssec : audfilters ] , the first step in many auditory fbs is the computation of a perceptually - motivated fb , see e.g. .the _ audlet fbs _ we present here are a family of perceptually - motivated fbs that satisfy a perfect reconstruction property , offer flexible redundancy and enable efficient implementation .they were introduced in and an implementation is available in the ltfat toolbox .the audlet fb has a general non - uniform structure as presented in fig .[ sfig : nonuniformfb ] with analysis filters , synthesis filters , and downsampling and upsampling factors .considering only real - valued signals allows us to deal with symmetric and process only the positive - frequency range .therefore let denote the number of filters in the frequency range \cap [ 0,f_{s}/2[$ ] , where to and is the nyquist frequency , i.e. half the sampling frequency .if , this range includes an additional filter at the zero frequency . furthermore , another filter is always positioned at the nyquist frequency to ensure that the full frequency range is covered .thus , all fbs below feature filters in total and their redundancy is given by , since coefficients in the to -th subbands are complex - valued ..parameters of the perceptually - motivated audlet fb [ cols="^,^,^",options="header " , ] [ tab:1 ] the audlet filters s , are constructed in the frequency domain by where is a prototype filter shape with bandwidth and center frequency .here , the shape factor controls the effective bandwidth of and determines its center frequency .the factor ensures that all filters ( i.e. for all ) have the same energy . to obtain filtersequidistantly spaced on a perceptual frequency scale , the sets and are calculated using the corresponding and formulas , see tab . 
1 for more information on the audlet parameters and their relations .since we emphasize inversion , the default analysis parameters are chosen such that the filters and downsampling factors form a frame .as an example , the audlet ( a ) and gammatone ( b ) analyses of a speech signal are represented in fig .[ fig : erbvsgfb_img ] using aud = erb and = 6 filters per erb .the filter prototype for the audlet was a hann window .it can be seen that the two signal representations are very similar over the whole time - frequency plane .since the gammatone filter is an acknowledged auditory filter model , this indicates that the time - frequency resolution of the audlet approximates well the auditory resolution . as discussed in sec .[ ssec : masking ] not all components of a sound perceived .this effect can be described by masking models and naturally leads to the following question : given a time - frequency representation or any representation linked to audio , how can we apply that knowledge to only include audible coefficients in the synthesis ? in an attempt to answer this question , efforts were made to combine frame theory and masking models into a concept called the _ irrelevance filter_. this concept is somehow linked to the currently very prominent sparsity and compressed sensing approach , see e.g. for an overview . to reduce the amount of non - zero coefficients , the irrelevance filter uses a perceptual measure of sparsity , hence _perceptual sparsity_. perceptual and compressed sparsity can certainly be combined , see e.g. . similar to the methods used in compressed sensing , a redundant representation offers an advantage for perceptual sparsity , as well , as the same signal can be reconstructed from several sets of coefficients .the concept of the irrelevance filter was first introduced in and fully developed in .it consists in removing the inaudible atoms in a gabor transform while causing no audible difference to the original sound after re - synthesis .precisely , an adaptive threshold function is calculated for each spectrum ( i.e. at each time slice ) of the gabor transform using a simple model of spectral masking ( see sec .[ sec : simmask0 ] ) , resulting in the so - called irrelevance threshold .then , the amplitudes of all atoms falling below the irrelevance threshold are set to zero and the inverse transform is applied to the set of modified gabor coefficients .this corresponds to an adaptive _ gabor frame multiplier _ with coefficients in .the application of the irrelevance filter to a musical signal sampled at 16 khz is shown in fig .[ fig : irrelfilter ] . a matlab implementation of the algorithm proposed in was used .all gabor transform and filter parameters were identical to those mentioned in .noteworthy , the offset parameter was set to -2.59 db . in this particular example , about 48% components were removed without causing any audible difference to the original sound after re - synthesis ( as judged by informal listening by the authors ) . a formal listening test performed in with 36 normal - hearing listeners and various musical and speech signals indicated that , on average , 36% coefficients can be removed without causing any audible artifact in the re - synthesis .+ the irrelevance filter as depicted here has shown very promising results but the approach could be improved . 
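Before turning to its limitations, here is a rough sketch of the mechanics just described: one threshold curve per time slice, obtained here from a crude spectral-spreading stand-in shifted by an offset in dB, and a binary mask that zeroes every atom below the threshold (a 0/1 frame multiplier). The smoothing kernel and the way the offset is applied are placeholders, not the masking model or parameter settings of the cited experiments, and the transform and re-synthesis steps are left out.

```python
import numpy as np

def irrelevance_mask(C, offset_db=-2.59, spread_bins=5):
    """Per-time-slice threshold: a smoothed (spread) version of the
    magnitude spectrum shifted by offset_db; coefficients below it are
    declared inaudible.  This is a stand-in for a real spectral masking
    model, kept only to show the mechanics."""
    mag = np.abs(C)
    kernel = np.hanning(2 * spread_bins + 1)
    kernel /= kernel.sum()
    # smooth each column (one spectrum per time slice) along frequency
    spread = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, mag)
    threshold = spread * 10.0 ** (offset_db / 20.0)
    return mag >= threshold

def apply_irrelevance_filter(C):
    """Binary frame multiplier: keep 'audible' atoms, zero the rest."""
    mask = irrelevance_mask(C)
    removed = 1.0 - mask.mean()
    return C * mask, removed

# toy coefficient matrix (frequency bins x time slices), random data only
rng = np.random.default_rng(1)
C = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
C_filtered, removed = apply_irrelevance_filter(C)
print(f"fraction of atoms set to zero: {removed:.2f}")
```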
specifically , the main limitations of the algorithm are the fixed resolution in the gabor transform and the use of a simple spectral masking model to predict masking in the time - frequency domain . combining an invertible perceptually - motivated transform like the audlet fb ( sec .[ sec : audlet0 ] ) with a model of time - frequency masking ( sec . [ sec : tfmask0 ] ) is expected to improve performance of the filter .this is work in progress .potential applications of perceptual sparsity include , for instance : 1 .sound / data compression : for applications where perception is relevant , there is no need to encode perceptually irrelevant information. data that can not be heard should be simply omitted .a similar algorithm is for example used in the mp3 codec .if `` over - masking '' is used , i.e. the threshold is moved beyond the level of relevance , a higher compression rate can be reached .2 . sound design : for the visualization of sounds the perceptually irrelevant part can be disregarded .this is for example used for car sound design .in this chapter , we have discussed some important concepts from hearing research and perceptual audio signal processing , such as auditory masking and auditory filter banks .natural and important considerations served as a strong indicator that frame theory provides a solid foundation for the design of robust representations for perceptual signal analysis and processing .this connection was further reinforced by exposing the similarity between some concepts arising naturally in frame theory and signal processing , e.g. between frame multipliers and time - variant filters .finally , we have shown how frame theory can be used to analyze and implement invertible filter banks , in a quite general setting where previous synthesis methods might fail or be highly inefficient .the codes for matlab / octave to reproduce the results presented in secs .[ sec : frameth ] and [ sec : appli ] in this chapter are available for download on the companion webpage https://www.kfs.oeaw.ac.at/frames_for_psychoacoustics .it is likely that readers of this contribution who are researchers in psychoacoustics or audio signal processing have already used frames without being aware of the fact .we hope that such readers will , to some extent , grasp the basic principles of the rich mathematical background provided by frame theory and its importance to fundamental issues of signal analysis and processing . with that knowledge , we believe, they will be able to better understand the signal analysis tools they use and might even be able to design new techniques that further elevate their research . on the other hand , researchers in applied mathematics or signal processing have been supplied with basic knowledge of some central psychoacoustics concepts .we hope that our short excursion piqued their interest and will serve as a starting point for applying their knowledge in the rich and various fields of psychoacoustics or perceptual signal processing .the authors acknowledge support from the austrian science fund ( fwf ) start - project flame ( frames and linear operators for acoustical modeling and parameter estimation ; y 551-n13 ) and the french - austrian anr - fwf project potion ( `` perceptual optimization of time - frequency representations and audio coding ; i 1362-n30 '' ) .they thank b. laback for discussions and w. kreuzer for the help with a graphics software .m. bzat , v. roussarie , t. voinier , r. kronland - martinet , and s. 
ystad .car door closure sounds : characterization of perceptual properties through analysis - synthesis approach . in _ proceedings of the 19th international congress on acoustics ( ica ) , madrid , spain_ , september 2007 .g. chardon , t. necciari , and p. balazs .perceptual matching pursuit with gabor dictionaries and time - frequency masking . in _ proceedings of the 39th international conference on acoustics , speech , and signal processing ( icassp 2014 ) _ , 2014 .h. g. feichtinger and k. nowak .first survey of gabor multipliers . in h.g. feichtinger and t. strohmer , editors , _ advances in gabor analysis _ , appl ., pages 99128 .birkhuser , 2003 .g. matz and f. hlawatsch .ime - frequency transfer function calculus ( symbolic calculus ) of linear time - varying systems ( linear operators ) based on a generalized underspread theory ., 39(8):40414070 , 1998 .t. necciari , p. balazs , n. holighaus , and p. sndergaard .he erblet transform : an auditory - based time - frequency representation with perfect reconstruction . in _ proceedings of the 38th international conference on acoustics , speech , and signal processing ( icassp 2013 ) _ ,pages 498502 , 2013 .t. necciari , n. holighaus , p. balazs , z. pra , and p. majdak .frame - theoretic recipe for the construction of gammatone and perceptually motivated filter banks with perfect reconstruction .http://arxiv.org/abs/1601.06652 .r. d. patterson , k. robinson , j. holdsworth , d. mckeown , c. zhang , and m. h. allerhand .complex sounds and auditory images . in _ auditoryphysiology and perception , proceedings of the 9th international symposium on hearing _ , pages 429446 , oxford , uk , 1992 .pergamond .n. perraudin , n. holighaus , p. sndergaard , and p. balazs .abor dual windows using convex optimization . in _ proceeedings of the 10th international conference on sampling theory and applications ( sampta 2013 ) _ ,2013 .z. pra , p. sndergaard , n. holighaus , c. wiesmeyr , and p. balazs . .in m. aramaki , o. derrien , r. kronland - martinet , and s. ystad , editors , _ sound , music , and motion _ , lecture notes in computer science , pages 419442 .springer international publishing , 2014 .d. t. stoeva and p. balazs .riesz bases multipliers . in m.cepedello boiso , h. hedenmalm , m. a. kaashoek , a. montes - rodrguez , and s. treil , editors , _ concrete operators , spectral theory , operators in harmonic analysis and approximation _ , volume 236 of _ operator theory : advances and applications _ , pages 475482 .birkhuser , springer basel , 2014 .t. strohmer .umerical algorithms for discrete gabor expansions . in h.g. feichtinger and t. strohmer , editors , _ gabor analysis and algorithms : theory and applications _ , appl .numer . harmon ., pages 267294 .birkhuser boston , boston , 1998 .
this review chapter aims to strengthen the link between frame theory and signal processing tasks in psychoacoustics . on the one side , the basic concepts of frame theory are presented and some proofs are provided to explain those concepts in some detail . the goal is to reveal to hearing scientists how this mathematical theory could be relevant for their research . in particular , we focus on frame theory in a filter bank approach , which is probably the most relevant view - point for audio signal processing . on the other side , basic psychoacoustic concepts are presented to stimulate mathematicians to apply their knowledge in this field .
some kinds of bacterial colonies present interesting structures during their growth . depending on the bacterial species and the culture conditions, colonies can exhibit a great diversity of forms .in general , the complexity of the growth pattern increases as the environmental conditions become less favorable .bacteria respond to adverse growth conditions by developing sophisticated strategies and higher micro - level organization in order to cooperate more efficiently .examples of these strategies are : the differentiation into longer - motile bacteria , the production of extracellular wetting fluid , the secretion of surfactants which change the surface tension or the chemotactic response to chemical agents produced by bacteria .the experiments are usually made in a petri dish , which contains a solution of nutrient and agar .a drop of bacterial solution is then inoculated in the center of the dish .the growth conditions are controlled by the initial concentration of the medium components .the agar concentration determines the consistency of the medium , which becomes harder as the amount of agar increases , and the nutrient concentration controls the bacterial reproduction . depending on these two factors, the colony grows at a higher or lower rate , developing different kinds of patterns . in particular , colonies of the bacterium _ bacillus subtilis _og-01 present a rich variety of structures .[ fig1 ] shows the morphological diagram obtained by matsushita and co - workers .they classified the colony patterns into five types , from a to e , whose main features can be summarized as follows .if the medium is very hard , i.e. , with a high concentration of agar , bacteria can hardly move and the colony essentially grows due to the consumption of nutrient and subsequent reproduction .if the level of nutrient is also low ( region ) , the growth is controlled by the diffusion of the nutrient up to the bacteria placed at the interface .the colony develops a ramified structure very similar to the patterns obtained with the diffusion - limited aggregation model ( dla) .it takes approximately one month to cover the dish .if the initial agar concentration remains high and the nutrient concentration is increased , the growth is faster than in region .the branches grow thicker until they fuse into a dense disk with rough interface ( region ) , similar to the patterns obtained with an eden model .this structure needs 5 - 7 days to cover the disk .when the level of agar is decreased , which produces a medium a little softer than in region b , and the level of nutrient remains high , the colony forms concentric rings ( region ) .this region is characterized by periodic dynamics : for 2 - 3 hours the colony expands while the bacteria move actively ( `` migration phase '' ) and then they almost stop for 2 - 3 hours ( `` consolidation phase '' ) , during which the colony does not grow appreciably and the bacterial density increases due to reproduction .the crossover between the two phases is sharp .the periodic cycles of subsequent migration and consolidation phases create the pattern of concentric rings .accurate measurements show that in the growth phase there is a high concentration of longer and more motile bacteria , as a consequence of a differentiation process .when a high level of nutrient is maintained , and the agar concentration is decreased further , the colony spreads over the agar plate , and after less than 8 hours homogeneous disk of low bacterial density is formed covering all the dish 
( region ) . in this thin surface , bacteria are always short and can move easily by swimming . by decreasing the nutrient concentrations for a semi - solid medium ,the colony develops a densely branched pattern ( region ) similar to the dense branching morphology ( dbm ) found in other systems .the ratio of the width of the branches to the gap between them is constant over the whole colony .the colony grows quite fast , showing its main activity at the tips of the fingers , and covering the dish in less than 24 hours .the dynamics is related to both the consumption of nutrients and the bacterial motility . in general , when environmental conditions are adverse ( low nutrient or hard surface ) , a higher level of cooperation is observed .the existence of a cooperative behavior seems to be determinant in the formation of the rings patterns of region .the same kind of concentric rings has been found in experiments with other bacterial species . in the case of the bacterium _ proteus mirabilis _ , the migration phases clearly involve the movement of differentiated swarmer bacteria ( elongated and hyperflagellated) .similar ring patterns have also been observed in other non - living systems , like the liesegang rings produced by precipitation in the wake of a moving reaction front , or some experiments of interfacial electrodeposition . in the case of the liesegang patterns , it is well known that the distance between rings increases as , whereas in bacterial and electrodeposition it is constant . #1#20.50 several models have been proposed to explain the variety of patterns exhibited by _bacillus subtilis _ , as shown in fig .dla - like patterns ( region in fig .[ fig1 ] ) have been interpreted as growth controlled by the diffusion of nutrients in the context of the dla model .ben - jacob and co - workers proposed a communicating walkers model to describe some of the morphologies .this model reproduces the crossover between regions a and b by coupling random walkers to fields representing the nutrients .dbm - like patterns are also obtained by introducing a chemotactic agent .other kinds of models are based on reaction - diffusion equations for bacterial density .the fisher equation can be used for reproducing the homogeneous circular morphology ( region ) .further developments were achieved by introducing new elements to the fisher model , such as a field for nutrient and nonlinear diffusion coefficients .depending on the new elements introduced , these models reproduce some of the patterns of fig .however , the ring patterns ( region ) have so far eluded a satisfactory modelization . although the model suggested in ref . can generate concentric ring patterns , they are rather different from those observed in experiments .in fact , dynamical cycles of consolidation and growth phases are not found .finally , we must mention a model proposed by esipov _ for the study of _ proteus mirabilis _ colonies , which introduces a life - time for the differentiated swarmer bacteria .this model reproduces concentric ring patterns but does not explain why no periodicity is observed in other regions of the morphological diagram . 
in this paper , we propose a model consisting of two coupled diffusion - reaction equations for bacteria and nutrient concentrations , where the bacterial diffusion coefficient can adopt two different expressions , corresponding to two possible mechanisms of motion .the first is the usual random swimming performed by bacteria in liquid medium .the second is developed by bacteria in response to adverse growth conditions , and depends on their concentration .bacterial response is modeled as a global variable that can present hysteresis .our model reproduces the five morphologies observed in the experiments , including the ring patterns .we consider a two - dimensional system containing bacteria and nutrients . both diffuse , while bacteria proliferate by feeding on nutrient .let us denote by the density of bacteria at time and spatial position , and by the concentration of nutrient .then , and are in general governed by the following equations : the function denotes the consumption term of nutrient by bacteria , and can be described by michaelis - menten kinetics where is the intrinsic consumption rate . for small ,the consumption rate is approximately linear in and it saturates at the value as increases . and are the diffusion coefficients of bacteria and nutrient respectively .we assume that is constant , but can depend on nutrient and bacterium concentrations . as explained above, experiments show that in adverse conditions , bacteria can adapt themselves in order to improve their motility . in a soft medium and high nutrient concentration ( region of fig .[ fig1 ] ) , short bacteria can swim randomly without difficulty , but in an adverse environment ( regions and ) they need to develop mechanisms to become more motile . for intermediate conditions of semi - solid medium and sufficient nutrient ,there are periods of fast growth ( migration phase ) and slow growth ( consolidation phase ) that lead to the concentric ring patterns .the analysis of periodic rings suggests a dynamical scheme with hysteresis that can be outlined in the following way : during the consolidation phase , the population of longer - motile bacteria increases in order to overcome the opposition to the movement .when this population exceeds a certain value , enhanced movement becomes possible and a migration phase begins .then , however , a progressive decrease in long - bacterial population ensues , until it reaches a minimum at which the `` enhanced - movement mechanism '' does not work . then a new consolidation phase begins . within this scheme , region corresponds to a case where the maximum value is never reached ( and therefore bacteria always move by usual diffusion ) , whereas in regions and long - bacterial population does not fall below the minimum ( and therefore always moves by the enhanced - movement mechanism ) .all these ideas can be introduced in our model by means of two basic points : \(a ) the diffusion coefficient can take two different expressions depending on the long - bacterial population .\(b ) the net production of long bacteria depends on the environmental conditions and also on the colony phase of growth .according to these ideas , we propose the following function for : where depends on the concentration of agar , which is lower for harder medium . to take into account the inhomogeneities of the medium, we introduce a quenched disorder in , which is written as , being a random term defined on a square lattice . 
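To fix ideas, the two ingredients just introduced can be written down as plain functions: a Michaelis–Menten-type consumption term and a bacterial diffusion coefficient that is proportional to the nutrient concentration and switches between an ordinary term and a density-dependent enhanced-movement term, optionally multiplied by a quenched-disorder factor. The parameter names and exact functional forms below are placeholders reconstructed from the prose description, not the stripped equations of the text.

```python
def consumption(b, n, k=1.0, gamma=1.0):
    """Michaelis-Menten style uptake: roughly linear in n for small n,
    saturating for large n; proportional to the bacterial density b."""
    return k * b * n / (1.0 + gamma * n)

def bacterial_diffusion(b, n, enhanced, d0=0.1, d1=1.0, disorder=None):
    """Diffusion coefficient for bacteria.

    enhanced == False : ordinary swimming, coefficient ~ d0 * n
    enhanced == True  : cooperative movement, coefficient ~ d1 * b * n
    An optional quenched-disorder field multiplies the coefficient to
    mimic inhomogeneities of the agar surface (placeholder choice)."""
    sigma = 1.0 if disorder is None else disorder
    return sigma * ((d1 * b * n) if enhanced else (d0 * n))

# tiny demo with scalar values
print(consumption(0.5, 2.0), bacterial_diffusion(0.5, 2.0, enhanced=True))
```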
from now on , will be referred to as the diffusion parameter .the first term of eq .( [ dif ] ) describes the usual diffusion of bacteria in a liquid medium .the second describes the cooperative enhanced - movement mechanism promoted by long bacteria .this second mechanism can be modeled by a diffusion coefficient that depends on the bacterial concentration .we multiply both terms by nutrient concentration to take into account the fact that bacteria are inactive in the region where nutrient has been depleted .this dependence on would not have been necessary if we had considered a `` death '' term in the equation for .the coefficients and can adopt two different values ( one of them zero ) depending on the concentration of the long bacteria , as will be specifed below. equations ( [ mod1 ] ) with eqs.([growth])-([dif ] ) can be written in a simpler form as with at this point , we need to specify how to choose and depending on the population of long bacteria . in order to do this ,we introduce a global phenomenological quantity that measures the amount of long bacteria .the evolution of this quantity should have a `` creation term '' that represents the transformation of short bacteria into long ones , and an `` annihilation term '' that represents the opposite transformation ( septation ) .it seems reasonable to assume that the creation term is directly dependent on the mean bacterial concentration , and inversely dependent on the level of nutrient ( ) and on the diffusion parameter ( adverse conditions , _i.e. _ and small , means a faster differentiation process ) .with regard to the annihilation term , it can adopt two possible values depending on the growth phase . the simplest equation that includes all these considerationscan be written as where is a constant and can have two different values ( or , ) .the quantity , defined as , is a measure of the mean concentration of bacteria inside the colony .we introduce the hysteresis previously pointed out by assuming that there are two limit values and for which : with a suitable choice of parameters , and , and by changing only and , we can obtain colonies that always move with one of the two types of diffusion , or colonies that periodically change from one type to the other .this will occur if , which will give rise to the ring patterns . although the bacterial response has been expressed in terms of the population of long - bacteria , other possible kinds of responses admit identical modelization . in this sense ,our model is quite general .we have numerically integrated eqs.([model ] ) with ( [ population])-([cond ] ) in a square lattice of lateral size using a 4-th order runge - kutta s method with mesh - size and time step .the system was initially prepared by assigning to each point a nutrient concentration , being a uniform random number in the interval , and a bacterial concentration , except in a small central square where .the random term of the diffusion , , takes a different and uncorrelated value in each box of side .the random values are assumed to be uniformly distributed in the interval .the box size and the intensity do not essentially affect the results . in all our simulations , we used the parameters , , , , , , , and .we reproduce the different morphologies observed in experiments by changing the values of the initial concentration of nutrients and the softness of the media , related to . # 1#20.50 in figs .[ fig2]-[fig3 ] we present the results obtained for and different values of . 
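A minimal, self-contained version of such a simulation is sketched below: a square lattice with a small central inoculum, randomly perturbed initial nutrient, a quenched random factor per cell, a global response variable compared against two thresholds so that the diffusion mode switches with hysteresis, and an explicit Euler update of both fields. The text uses a fourth-order Runge–Kutta scheme; Euler is kept here only for brevity, no stability analysis is attempted, and every numerical value and functional form is a placeholder.

```python
import numpy as np

L, dt, dx, steps = 100, 0.005, 0.5, 500
k, gamma, dn = 1.0, 1.0, 0.5            # uptake rate, saturation, nutrient diffusion
d0, d1 = 0.05, 1.0                       # ordinary / enhanced diffusion parameters
v_min, v_max = 0.2, 1.0                  # hysteresis thresholds for the response
mu0, mu1 = 0.05, 0.5                     # response decay in the two phases
n0, eps = 1.0, 0.1

rng = np.random.default_rng(0)
n = n0 * (1.0 + eps * (rng.random((L, L)) - 0.5))              # perturbed nutrient
b = np.zeros((L, L)); b[L//2-2:L//2+3, L//2-2:L//2+3] = 0.5    # central inoculum
sigma = 1.0 + 0.3 * (rng.random((L, L)) - 0.5)                  # quenched disorder
v, enhanced = 0.0, False                                        # global response, phase

def lap(f):
    """Five-point Laplacian with reflecting boundaries."""
    g = np.pad(f, 1, mode="edge")
    return (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4 * f) / dx**2

for _ in range(steps):
    eat = k * b * n / (1.0 + gamma * n)                 # Michaelis-Menten uptake
    D = sigma * ((d1 * b * n) if enhanced else (d0 * n))
    b = b + dt * (D * lap(b) + eat)     # div(D grad b) crudely taken as D * lap(b)
    n = n + dt * (dn * lap(n) - eat)
    # global response: fed by the mean density inside the colony,
    # drained at a rate that depends on the current phase
    inside = b > 1e-3
    mean_density = b[inside].mean() if inside.any() else 0.0
    mu = mu1 if enhanced else mu0
    v = v + dt * (mean_density - mu * v)
    if not enhanced and v > v_max:
        enhanced = True                  # migration phase starts
    elif enhanced and v < v_min:
        enhanced = False                 # consolidation phase starts

print(float(b.sum()), enhanced)
```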
by increasing reproduce the crossover between regions and of fig .[ fig1 ] , from dla - like patterns ( fig .[ fig2](a ) ) to a dense rough structure similar to that found with an eden model ( fig .[ fig2](b ) ) .all of them correspond to a situation in which , due to the small value of , the creation term of eq .( [ population ] ) is greater than , except at the very beginning .the response can never decrease below the value , and therefore the colony will always grow with the enhanced - movement mechanism . in spite of this cooperative mechanism , and because of the hardness of the medium , the effective diffusion is still small .the growth is mostly due to reproduction by feeding on the nutrient . for low level of nutrient , _i.e. _ small , the colony growth is limited by the diffusion of these nutrients .it develops branches , which are thicker as increases .the prototype model that reproduces this kind of structure is the diffusion - limited aggregation ( dla ) , which is known to form a fractal pattern with a fractal dimension of .experiments performed by matsushita et al . in region of fig.1 also show a fractal growth with dimension .we have analyzed the fractal nature of the patterns obtained with our model , for and several values of the initial nutrient , from ( dla - like ) to ( rough structure ) .we have calculated their fractal dimensions by using the box - counting method . in fig .[ fig3 ] we show , in a log - log plot , the number of boxes of size that contains any part of the pattern , versus the size of the boxes .the slopes of the lines represent the fractal dimensions .we observe that the cases that correspond to low nutrient have a fractal dimension of about , showing good agreement with experiments . on the other hand, there is an abrupt change between these patterns and those that are not fractal ( ) .these last cases can be analyzed in terms of the roughness of their interfaces . #1#20.50 it is well known that eden structures are not themselves fractal , but their surfaces exhibit a self - affine scaling .this implies that , for a long enough time , the width of the rough interface scales with an exponent as a function of the length of the interface ( ) .the roughness exponent for the eden model is .vicsek et al . analyzed experimental data corresponding to the region of fig .they concluded that these colony surfaces are self - affine with a roughness exponent .we have checked this point for our dense rough pattern ( fig .[ fig2](b ) ) by measuring the width for intervals of interface of length .the results , as a function of , are presented in fig .[ fig4 ] . in order to avoid additional effects derived from the radial growth of the colony, we have also performed a complementary simulation for the same parameters as fig .[ fig2](b ) but with a strip geometry . to do this , we have used a rectangular lattice of horizontal lateral size , with periodic boundary conditions in direction , and taken as an initial condition for bacteria a horizontal line of length .the results for this case are also plotted in fig .[ fig4 ] . 
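Both measurements used in this section are easy to reproduce once a snapshot of the colony is available as an array: a box-counting estimate of the fractal dimension of the occupied set, and the width of a strip-geometry interface over windows of increasing length, whose log-log slope gives the roughness exponent. The sketch below runs on synthetic test data (a filled disk and a random-walk interface) purely to show that the estimators behave as expected.

```python
import numpy as np

def box_count_dimension(occ, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting dimension of a boolean occupancy array: slope of
    log N(s) against log(1/s)."""
    counts = []
    Lx, Ly = occ.shape
    for s in sizes:
        nx, ny = -(-Lx // s), -(-Ly // s)            # ceiling division
        padded = np.zeros((nx * s, ny * s), dtype=bool)
        padded[:Lx, :Ly] = occ
        blocks = padded.reshape(nx, s, ny, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def interface_width(heights, window):
    """Mean standard deviation of the interface height over windows of
    a given length (strip geometry)."""
    h = np.asarray(heights, dtype=float)
    w = [h[i:i + window].std() for i in range(0, len(h) - window + 1, window)]
    return float(np.mean(w))

# sanity checks on synthetic data
L = 256
yy, xx = np.mgrid[:L, :L]
disk = (xx - L / 2) ** 2 + (yy - L / 2) ** 2 < (L / 3) ** 2
print("filled disk, dimension ~ 2:", box_count_dimension(disk))

rng = np.random.default_rng(0)
walk = np.cumsum(rng.choice([-1, 1], size=4096))      # roughness exponent ~ 1/2
ls = np.array([8, 16, 32, 64, 128, 256])
ws = np.array([interface_width(walk, l) for l in ls])
alpha, _ = np.polyfit(np.log(ls), np.log(ws), 1)
print("random-walk interface, roughness ~ 0.5:", alpha)
```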
for both circular and strip cases, we observe analogous behavior to that observed in experiments .our results show a linear region with a slope compatible with the experimental value .# 1#20.50 with the aim of reproducing other morphologies of fig .[ fig1 ] , we now keep the initial nutrient fixed at the value and increase the diffusion parameter .results are shown in fig .[ fig5](a)-(b ) .we observe a crossover from the dla - like structure ( fig .[ fig2](a ) ) to a dense branching morphology analogous to that represented in region of fig .[ fig1 ] . in fig .[ fig5](c)-(d ) , we present two snapshots obtained for a fixed value of the initial nutrient .they show how different kinds of patterns are obtained when is increased : from the dense rough structure ( fig .[ fig2](b ) ) , to concentric rings ( fig . [ fig5](c ) ) and homogeneous disk ( fig .[ fig5](d ) ) .they correspond to the regions and respectively .homogeneous disks are obtained when and are so high that the creation term of eq .( [ population ] ) is always smaller than .this means that the value , above which the enhanced - movement mechanism begins , is never reached , and bacteria move with the usual diffusion coefficient . #1#20.50 ring patterns correspond to a narrow region of parameters and for which the creation term of eq .( [ population ] ) takes a value between and . as explained in section ii, this leads to dynamics in which bacteria move alternatively by usual diffusion ( consolidation phase ) or by the enhanced - movement mechanism ( migration phase ) .the two phases are clearly manifested in fig .[ fig6](a ) , where we represent the radius of the colony as a function of time .the pattern of concentric rings is a consequence of this dynamic behavior . in fig.[fig6](b ) we plot the radial density profile , circularly - averaged , corresponding to the ring pattern shown in fig .[ fig5](c ) .the maxima are formed in the positions where a consolidation phase began . to illustrate this point ,we have pointed out in fig .[ fig6 ] the positions corresponding to the colony radius at the beginning of each consolidation phase . # 1#20.50 numerically , our model also reproduces the experimentally observed robustness of the growth - plus - consolidation period , which is barely dependent on changes in either nutrient or agar concentrations over a wide range . for high enough ,the value of the global quantity approaches . in this limit , as can be derived from eq .( [ population ] ) the period is given by which does not depend on . moreover , as a function of , the period also maintains a rather constant value within a certain range ( determined by parameters , and ) to increase sharply in the boundaries of the ring patterns region ( , ) . 
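The circularly averaged profile and the colony radius used in the figures just described can be extracted from a density snapshot with a few lines of array bookkeeping. In the sketch below the cutoff defining the colony edge is an arbitrary choice and the test field is a synthetic blob rather than simulation output.

```python
import numpy as np

def radial_profile(field, center=None, nbins=None):
    """Circularly averaged profile of a 2-D field around `center`."""
    Lx, Ly = field.shape
    cx, cy = center if center is not None else (Lx / 2.0, Ly / 2.0)
    yy, xx = np.mgrid[:Lx, :Ly]
    r = np.hypot(xx - cx, yy - cy)
    nbins = nbins or int(r.max()) + 1
    idx = np.clip(r.astype(int), 0, nbins - 1)
    sums = np.bincount(idx.ravel(), weights=field.ravel(), minlength=nbins)
    counts = np.bincount(idx.ravel(), minlength=nbins)
    return sums / np.maximum(counts, 1)

def colony_radius(field, cutoff=1e-3):
    """Largest radius at which the averaged density exceeds `cutoff`."""
    prof = radial_profile(field)
    above = np.nonzero(prof > cutoff)[0]
    return int(above[-1]) if above.size else 0

# toy field: a Gaussian blob standing in for a colony snapshot
L = 128
yy, xx = np.mgrid[:L, :L]
blob = np.exp(-((xx - L / 2) ** 2 + (yy - L / 2) ** 2) / (2 * 15.0 ** 2))
print(colony_radius(blob))
```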
for equal period ,the width of the rings increases with .we have proposed a reaction - diffusion model for the study of bacterial colony growth on agar plates , which consists of two coupled equations for nutrient and bacterial concentrations .the most important feature , which introduces differences from previous models , is the fact that here we consider two mechanisms for the bacterial movement : the random swimming in a liquid medium , and a cooperative enhanced movement developed by bacteria when the growth conditions are adverse .the two mechanisms are introduced in our model by means of a diffusion term with two different expressions which depend on the bacterial response to the environmental conditions .this response is modeled as a global variable that presents hysteresis depending on the conditions of the medium .the inhomogeneities of the agar plate have been taken into account as a quenched disorder in the diffusion parameter .we have shown that , simply by changing the parameters related to the hardness of the medium and the initial nutrient , our model reproduces all the patterns obtained experimentally with the bacterium _ bacillus subtilis _ : dla - like , dense - rough disk , dbm - like , ring patterns and homogeneous disk .we have calculated the fractal dimension of the dla - like structures and the roughness exponent of the rough disk surface , obtaining results in good agreement with experiments .the ring patterns have been obtained for intermediate values of agar and high nutrient . in this region , the bacterial response presents hysteresis and the two mechanisms of motion work alternatively , leading to cycles of migration and consolidation phases .the duration of these cycles is roughly constant for different values of nutrient and agar concentration over a wide range .this periodical dynamics generates patterns of concentric rings . in summary ,the model proposed satisfactorily reproduces the whole experimental morphological diagram .it represents a first attempt at describing the response of bacteria to adverse growth conditions and , in certain conditions , their ability to improve their motility .further refinements could be made .the bacterial response , here described as a global variable , could be considered in a more realistic way by introducing a coupling term in a local version of eq .( [ population ] ) for a field .however , preliminar studies with such a model shows the same essential features previously described .we thank i. rfols and j.m .sancho for helpful discussions .this research was supported by direccin general de investigacin cientfica y tcnica ( spain ) ( pb96 - 0241 - 02 ) , by comissionat per universitats i recerca de la generalitat de catalunya ( sgr97 - 439 ) and by universitat politcnica de catalunya ( pr-9608 ) .we also acknowledge computing support from fundaci catalana per a la recerca and centre catal de computaci i comunicacions .f. d. williams and r. h. schwarzhoff , ann .microbiol . * 32 * , 101 ( 1978 ) ; c. allison and c. hughes , sci .progress * 75 * , 403 ( 1991 ) ; r. belas , d. erskine and d. flaherty , j. bacteriol . * 173 * , 6279 ( 1991 ) .
a diffusion - reaction model for the growth of bacterial colonies is presented . the often observed cooperative behavior developed by bacteria which increases their motility in adverse growth conditions is here introduced as a nonlinear diffusion term . the presence of this mechanism depends on a response which can present hysteresis . by changing only the concentrations of agar and initial nutrient , numerical integration of the proposed model reproduces the different patterns shown by _ bacillus subtilis _ og-01 .
large - scale comprehensive protein - protein interaction data , which have become available recently , open the possibility of deriving new information about proteins from their associations in the interaction graph . in the following ,we discuss and compare several probabilistic methods for predicting protein functions from the functions of neighboring proteins in the interaction graph .in particular , we compare two recently published methods that are based on markov random fields with a prediction based on a machine - learning appproach using maximum - likelihood parameter estimation .it turns out that all three approaches can be considered different versions of each other using different approximations . the main difference between the markov random field ( mrf ) andthe machine - learning methods is that the former apprach takes a global look at the network , while the latter considers each networks node as an independent training example .however , in the mean - field approximation required to make the mrf approach numerically tractable , it is reduced to considering each node independently . the local enrichment - method considered in then be interpreted as another approximation which enables us to make predictions directly from observer frequencies , bypassing the numerical minimization step required in the more general machine - learning approach .we also extend these methods by considering a non - linear generalization for the probability distribution in the machine - learning approach , and by taking larger neighborhoods in the network into account .finally , we compare the performance of these methods to a standard supper vector machine .we consider a network specified by a graph whose nodes are proteins and whose undirected vertices indicate interactions between the proteins .each node is assigned one of a set of protein functions . in a machine - learning approach to prediction, this assignment follows a simple probability function depending on the protein functions in the network neighborhood of each node and parametrized by a small set of parameters .the learning problem is to estimate these parameters from a given sample of assignments .the prediction can then be performed by evaluating the probability distribution using these parameters .assume we only consider a single protein function at a time .node assignments can then be chosen binary , , with indicating that a node has the function under consideration . in the simplest case ,the probability that a node has assignment depends only its immediate neighbors , and since all vertices of the graph are equal , it can only depend on the number of neighbors , and the number of active neighbors .borrowing from statistical mechanics , we write the probability using a potential where the partition sum is a normalizing factor .this equation basically expresses that the log - probabilities of are proportional to the potential . in a lowest - order approximation, we can choose a linear function for the potential : later , we will extend this approach to more general functions .the parameters can be estimated from a set of training samples by maximum - likelihood estimation . in this approach , they are chosen to maximize the joint probability of the training data , or equivalently , to minimize its negative logarithm \quad.\ ] ] taking the partial derivative w.r.t . 
to a parametergives the equation the first term in the bracket is the expectation value of in the neighborhood under the probability distributions parametrized by : at the extremum , the derivative vanishes and we have the simple relation thus , in the maximum - likelihood model , the parameters are adjusted so that the average expectation values of the derivatives of the potential are equal to the averages observed in the training data .using eq .[ eq:1 ] , this gives the set of three equations . where the expectation value of in the environment and in the model parametrized by is given by only in the simplest case , , this equation can be solved analytically , leading to the relation : in the general case , we solve these equations numerically using a conjugate - gradient method by explicitly minimizing the joint probability . an alternative approach to prediction starts out from considering a given network and the protein function assignments as a whole and assigning a score based on how well the network and the function assignments agree . in the approach of , each link contributes to this score with a gain or , resp ., if both nodes at the ends of the link have the same function or , and a penalty if they have different function assignments . assuming again that the log - probabilities are proportional to the scores , this induces a probability distribution over all joint function assignments given by where now the normalization factor is calculated by summing over all possible joint function assignments of the nodes .the scoring function can be expressed as with the parameters in terms of statistical mechanics , this describes a ferromagnetic system where the inverse temperature is determined by and an external field by and . again, maximum - likelihood parameter estimation is performed by finding a set of parameters such that the probability of the sample configurations is maximized : the logarithm of the partition sum appearing in the second term can be related to the entropy by the quantity is the thermodynamical free energy .maximum likelihood parameters estimation therefore corresponds to choosing the parameters such that the energy of the given configuration is minimized while the free energy of the system as a whole is maximized : unfortunately , this requires the calculation of both the internal energy , , and the entropy , , of the system and thus more or less a complete solution of the system .this can be avoided by employing the _mean field _approximation , in which the probability distribution is replaced by a trial distribution as a product of single - variable distributions which can be completely parametrized by the expectation values using optimum values for the parameters can then be estimated by minimizing the kl entropy of vs. 
the true distribution .interestingly , this approximation removes the distinguishing feature of the network approach , namely that the neighborhood structure ( in the sense of neghbors of neighbors ) is taken into account .the resulting equations are very similar to the machine - learning equations in which neighbors are treated as unrelated .the binomial - neighborhood approach is a simpler approach in which the probability distribution is chosen in such a way that it can be directly derived from observed frequencies without the minimization process typical for maximum - likelihood approaches .it is based on the assumption that the distribution of active neighbors of a node follows a binomial distribution whose single probability depends on whether the node is active or not : and correspondingly for using a single probability .this is the assumption of _ local enrichment _ ,i.e. that the probability to find an active node around another active node is larger than the probability to find an active node around an inactive node . using bayes theorem , we can use this to calculate the probability distribution of : where is the overall probability of observing an active node , and the resulting probability distribution can be written as with this can be easily rewritten in the same form as ( [ eq:2 ] ) \ ] ] the first term in the potential has the same form as ( [ eq:3 ] ) and adjusts the overall number of positive sites ; the two other terms constitute a bones for having positive neighbors ( proportional to ) and a penalty for having negative neighbors ( proportional to ) .this approach evidently gives a conditional probability distribution of the same for as the one in the machine - learning approach .however , the coefficient in the potential can be directly calculated from the observed frequencies , , and .this is only possible because we made here the assumption that the probability distribution is binomial .the machine - learning approach is more flexible in that in does not have to make this assumption and yields a true maximum - likelihood estimate even for distributions that deviate greatly from binomial form .in particular , the binomial distribution implies that the neighbors of a node behave statistically independent , which might be violated in a densely connected network , where we would expect clusters to form .to compare the different prediction methods , we chose the mips protein - protein interaction database for _ saccharomyces cerevisiae _ and the go - slim database of protein function assignments from the gene ontology consortium .the latter is a slimmed - down subset of the full gene ontology assignments comprising 32 different processes , 21 functions , and 22 cell compartments .we focused here on the process assignments as these were expected to correspond most closely to the interaction network .we compared four methods : 1 . the binomial neighborhood enrichment from sec .[ sec : bin ] , 2 .the machine - learning maximum - likelihood method from sec .[ sec : ml ] using a linear potential ( [ eq:1 ] ) 3 . the same method with an extended non - linear potential , and 4 . a standard support vector machine . 
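The point of the binomial-neighbourhood predictor is that its parameters come straight from observed frequencies, with no optimisation step (the maximum-likelihood variants fit a similar functional form numerically instead). The sketch below estimates the overall activity rate and the two neighbour-enrichment rates from a labelled interaction graph given as an adjacency list, and scores an unlabelled node with k active out of n labelled neighbours by comparing the two binomial posteriors; the toy graph, the names and the absence of count smoothing are all simplifications.

```python
from math import comb

def fit_binomial_neighbourhood(adj, labels):
    """adj: dict node -> list of neighbours; labels: dict node -> 0/1.
    Returns (p, q1, q0) estimated by simple counting.  In practice the
    counts would be smoothed to avoid rates of exactly 0 or 1."""
    nodes = list(labels)
    p = sum(labels[v] for v in nodes) / len(nodes)
    def neigh_rate(state):
        num = den = 0
        for v in nodes:
            if labels[v] != state:
                continue
            num += sum(labels[u] for u in adj[v] if u in labels)
            den += sum(1 for u in adj[v] if u in labels)
        return num / den if den else p
    return p, neigh_rate(1), neigh_rate(0)

def predict(p, q1, q0, n, k):
    """P(node active | k of its n labelled neighbours are active),
    assuming binomially distributed neighbourhoods (local enrichment)."""
    like1 = comb(n, k) * q1 ** k * (1 - q1) ** (n - k)
    like0 = comb(n, k) * q0 ** k * (1 - q0) ** (n - k)
    return p * like1 / (p * like1 + (1 - p) * like0)

# toy graph: an unlabelled node with 3 labelled neighbours, 2 of them active
adj = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
labels = {'a': 1, 'b': 1, 'c': 1, 'd': 0}
p, q1, q0 = fit_binomial_neighbourhood(adj, labels)
print(predict(p, q1, q0, n=3, k=2) > 0.5)
```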
for the probabilistic methods , we first looked at the single - function prediction problem in which the system is presented with a binary assignment expressingwhich proteins are known to have a given function , and then makes a prediction for an unknown protein based on the number of neighbors that have this function .-axis , and the number of neighbors having the funtion of interested on the -axis .the numbers indicate the total incidence of the situation , while the shading expresses how frequently the central node had the function of interest in that situation .the lines are the decision boundaries for the binomial method and the linear and polynomal machine - learning methods .the shading is the prediction region from the svm . ] in this case , the local environment of a node can be described by two numbers : , the number of neighbors , and , the number of neighbors that have the function assignment under consideration .the content of the training data set can be characterized by a glyph plot such as in fig .[ fig:1 ] . after learning the training data ,the probabilistic method has inferred a probability distribution that yields , for each pair , a probability which is then utilized for predictions .the 50%-level of this probability , which determines the prediction in a binary system , is indicated in fig .[ fig:1 ] by green lines .the three probabilistic predictors in fig .[ fig:1 ] yield similar results that differ rarely by more than one box .the main difference is that the binomial predictor is restricted to a straight line , while the linear and non - linear maximum - likelihood predictors can accomodate a little turn .linear and non - linear predictors differ only minimally .[ fig:2 ] finally the prediction from a support vector machine that was trained on the same single - function data set is indicated by a shaded area marking all those for which the svm returned a positive prediction .the border of this area very closely follows the linear and non - linear m.l . predictors . fig .[ fig:2 ] shows a sensitivity - specificity curve using five - fold cross validation for single - function prediction using the probabilistic predictors .again , all three curves follow each other quite closely , with a slight edge for the nonlinear m.l . predictor .the preceding discussion applied to the problem of single function prediction . to perform full prediction, we generated each of the three predictors separately for each function and chose , for each protein with an unknown function , the prediction with the largest probability .for simplicity , this approach does not take into account possible correlations between different protein functions .however , such correlations were taken into account for the support vector machine , which generated a full set of cross - predictors ( predicting function with neighbors of type ) .[ fig:3 ] in the probabilistic case , each predictor does not only provide us with a yes - no decision , but also with a probability for the prediction .we can use the information to restrict the predictions to highly probable ones .[ fig:3 ] shows the accuracy of the prediction as a function of how many predictions are made with different cut - offs in the predicted probability . again, all three curves closely follow each other , with maybe a small but unsignificant edge of the linear m.l . 
predictor .the predictions from all predictors including the svm were similar , and combining them would not have improved predictive accuracy ..prediction accuracy in five - fold cross validation for the yeast data set .[ cols="<,<,<",options="header " , ] finally , the success rates for all predictors are shown in table [ tab:1 ] using five - fold cross - validation on a data set of 2014 unique function assignments for the yeast proteome .it turns out that all four methods perform closely , with success rates between 30 and 33% .this compares to the null - hypothesis of prediction in a randomized network , in which we would have a success rate of 11% for these data .the protein - protein interaction data therefore roughly triples the prediction success over a random network .however , all methods , from the simple , counting - based binomial classifier to the full support vector machine , perform similarly .we also extended our methods to take larger neighborhoods ( second and higher - order neighbors ) into account , but failed to substantially improve predictive power . finally , we also performed protein function prediction on a recently published protein - interaction network for _ drosophila melanogaster _ , with similar results .we compared different probabilistic approaches to predicting protein functions in protein interaction networks . under closer analysis , the different markov random field methods in the literaturecan be related to a basic machine - learning approach with maximum - likelihood parameter estimation . using real data , they exhibit similar performance , with simple methods performing as well as more complex ones .this might indicate limits on the functional information contained in protein - protein interaction networks .a standard support vector machine gave similar result , though it was equipped with more information , namely the frequencies of all function classes in the neighborhood .the additional information did neither improve nor harm predictive performance .9999 s. letovsky , s. kasif , bioinformatics * 19 * , suppl . 1 , i197 ( 2003 ) . m. deng , t. chen , f. sun , in : proceedings , recomb 03 , 7th international conference on research in computational molecular biology , p. 95, acm press , new york , ny ( 2003 ) .l. giot _ et .science * 302 * , 1727 ( 2003 ) .p. uetz _ et .nature * 403 * , 623 ( 2000 ) . h. w. mewes _ et .nucleic acids research * 32 * , d41 ( 2004 ) .the gene ontology consortium , nucleic acids res * 32 * , d258 ( 2004 ) .chang , c .- j .lin , libsvm : a library for support vector machines , 2001 .software available at * http://www.csie.ntu.edu.tw/ cjlin / libsvm * uetz p , giot l , cagney g , mansfield ta , judson rs , knight jr , lockshon d , narayan v , srinivasan m , pochart p , qureshi - emili a , li y , godwin b , conover d , kalbfleisch t , vijayadamodar g , yang m , johnston m , fields s , rothberg jm .
we discuss probabilistic methods for predicting protein functions from protein - protein interaction networks . previous work based on markov randon fields is extended and compared to a general machine - learning theoretic approach . using actual protein interaction networks for yeast from the mips database and go - slim function assignments , we compare the predictions of the different probabilistic methods and of a standard support vector machine . it turns out that , with the currently available networks , the simple methods based on counting frequencies perform as well as the more sophisticated approaches .
on one hand , termination analysis of logic programs is a fairly established research topic within the logic programming community , see the surveys . for prolog , various tools are now available via web interfaces and we note that the mercury compiler , designed with industrial goals in mind by its implementors , has included two termination analyzers ( see and ) for a few years .on the other hand , non - termination analysis seems to remain a much less attractive subject .we can divide this line of research into two kinds of approaches : dynamic versus static analysis . in the former one , sets up some solid foundations for loop checking , while some recent work is presented in .the main idea is to prune at runtime at least all infinite derivations , and possibly some finite ones . in the latter approach , which includes the work we present in this article , present an algorithm for detecting non - terminating atomic queries with respect to a binary clause of the type .the condition is described in terms of rational trees , while we aim at generalizing non - termination analysis for the generic clp(x ) framework .our analysis shares with some work on termination analysis a key component : the binary unfoldings of a logic program , which transforms a finite set of definite clauses into a possibly infinite set of facts and binary definite clauses . while some termination analyses begin with the analysis of the recursive binary clauses of an upper approximation of the binary unfoldings of an abstract clp(n ) version of the original program , we start from a finite subset of the binary unfoldings of the concrete program ( a larger subset may increase the precision of the analysis , see for some experimental evidence ) .first we detect patterns of non - terminating atomic queries for binary recursive clauses and then propagate this non - termination information to compute classes of atomic queries for which we have a finite proof that there exists at least one infinite derivation with respect to the subset of the binary unfoldings of .the equivalence of termination for a program or its binary unfoldings given in is a corner stone of both analyses .it allows us to conclude that any atomic query belonging to the identified above classes of queries admits an infinite left derivation with respect to .so in this paper , we deliberately choose to restrict the analysis to binary clp clauses and atomic clp queries as the result we obtain can be directly lifted to full clp .our initial motivation , see , is to complement termination analysis with non - termination inside the logic programming paradigm in order to detect optimal termination conditions expressed in a language describing classes of queries .we started from a generalization of the lifting lemma where we may ignore some arguments .for instance , from the clause , we can conclude that the atomic query loops for any term , thus ignoring the second argument .then we have extended the approach , see which gives the full picture of the non - termination analysis , an extensive experimental evaluation , and a detailed comparison with related works .for instance , from the clause , and with the help of the criterion designed in we can now conclude that loops for any term which is an instance of .although we obtained interesting experimental results from such a criterion , the overall approach remains quite syntactic , with an _ ad hoc _ flavor and tight links to some basic logic programming machinery such as the unification algorithm .so we moved 
to the constraint logic programming scheme : in , we started from a generic definition of the generalization of the lifting lemma we were looking for . such a definition was practically useless but we were able to give a sufficient condition expressed as a logical formula related to the constraint binary clause under consideration .for some constraint domains , we showed that the condition is also necessary .depending on the constraint theory , the validity of such a condition can be automatically decided .moreover , we showed that the syntactic criterion we used in was actually equivalent to the logical criterion and could be considered as a correct and complete implementation specialized for the algebra of finite trees .the main contribution of this article consists in a strict generalization of the logical criterion defined in which allows us to reconstruct the syntactic approaches described in and .we emphasize the improvement with respect to in sect .[ section - special - kind - filter ] ( see example [ ex - stric - generalization ] ) .the paper is organized as follows .first , in sect . [ sect - preliminaries ] , we introduce some preliminary definitions .then , in sect. [ section - loop - inference - with - constraints ] , we recall , using clp terms , the subsumption test to detect looping queries . in sect .[ section - loop - filters ] , we present our generalized criterion for detecting looping queries , whilst in sect . [ section - special - kind - filter ] we consider the connections with the results of .for any non - negative integer , ] . throughout this paper ,we consider a fixed , infinite and denumerable set of variables .a _ signature _ defines a set of function and predicate symbols and associates an _ arity _ with each symbol .if is a first order formula on a signature and is a set of variables , then ( resp . ) denotes the formula ( resp . ) .we let ( resp . ) denote the existential ( resp .universal ) closure of .a _ -structure _ is an interpretation of the symbols in the signature .it is a pair ) ] maps : * each function symbol of arity in to a function : d^n \rightarrow d ] .a _ -valuation _ ( or simply a _ valuation _ if the -structure is understood ) is a mapping .every -valuation extends ( by morphism ) to terms : * (v(t_1),\dots , v(t_n)) ] of formulas to : * : = [ p](v(t_1),\dots , v(t_n)) ] and ] , ] if and only if there exists a valuation such that {v'}=1 ] if and only if {v'}=1 ] and if =0 ] that is the intended interpretation of the constraints .we assume the following : * is ideal , * the predicate symbol is in and is interpreted as identity in , * and correspond on , * is satisfaction complete with respect to , * the theory and the solver agree in the sense that for every , if and only if . consequently , as and correspond on , we have , for every , if and only if .[ example - reals ] the constraint domain has , , , and as predicate symbols , , , , as function symbols and sequences of digits ( possibly with a decimal point ) as constant symbols .only linear constraints are admitted .the domain of computation is the structure with reals as domain and where the predicate symbols and the function symbols are interpreted as the usual relations and functions over reals .the theory is the theory of real closed fields .a constraint solver for always returning either true or false is described in .[ example - lp ] the constraint domain has as predicate symbol and strings of alphanumeric characters as function symbols . 
the domain of computation of is the set of _ finite trees _( or , equivalently , of finite terms ) , , while the theory is clark s equality theory .the interpretation of a constant is a tree with a single node labeled with the constant .the interpretation of an -ary function symbol is the function mapping the trees , , to a new tree with root labeled with and with , , as child nodes .a constraint solver always returning either true or false is provided by the _ unification _ algorithm .clp( coincides then with logic programming .the signature in which all programs and queries under consideration are included is with and where , the set of predicate symbols that can be defined in programs , is disjoint from .we assume that each predicate symbol in has a unique arity denoted by .an _ atom _ has the form where and is a sequence of -terms . throughout this paper , when we write , we implicitly assume that contains terms . a clp( ) _ program _ is a finite set of rulesrule _ has the form where and are atoms and is a finite conjunction of primitive constraints such that .a _ query _ has the form where is an atom and is a finite conjunction of primitive constraints .given an atom , we write to denote the predicate symbol . given a query , we write to denote the predicate symbol .the set of variables occurring in some syntactic objects is denoted .the examples of this paper make use of the language clp( ) and the language clp( ) . in program and query examples ,variables begin with an upper - case letter , ] denotes an empty list .we consider the following operational semantics given in terms of _ derivations _ from queries to queries .let be a query and be a rule .let be a variant of variable disjoint with such that .then , is a _ derivation step _ of with respect to with as its _input rule_. we write to summarize a finite number ( ) of derivation steps from to where each input rule is a variant of a rule from program .let be a query . a sequence of derivation steps of maximal lengthis called a _ derivation _ of if , , are rules from and if the _ standardization apart _ condition holds , _i.e. _ each input rule used is variable disjoint from the initial query and from the input rules used at earlier steps .we say _ loops _ with respect to if there exists an infinite derivation of .in the logic programming framework , the subsumption test provides a simple way to infer looping queries : if , in a logic program , there is a rule such that is more general than , then the query loops with respect to . in this section, we extend this result to the constraint logic programming framework .a query can be viewed as a finite description of a possibly infinite set of atoms , the arguments of which are values from .suppose that .* the query describes those atoms where is a real and the term can be made equal to while the constraint is satisfied . * the query describes those atoms where and are reals and and can be made equal to and respectively while the constraint is satisfied . in order to capture this intuition ,we introduce the following definition .the set of atoms that is described by a query is denoted by and is defined as : .clearly , if and only if is unsatisfiable in . 
moreover , two variants describe the same set .notice that the operational semantics we introduced above can be expressed using sets described by queries : [ lemma - operational - sem ] let be a query and be a rule .there exists a derivation step of with respect to if and only if .the more general than " relation we consider is defined as follows : we say that a query is _ more general than _ a query if . * in any constraint domain , is more general than any query verifying ; * in the constraint domain , the query is more general than the query ; * in the constraint domain , the query is more general than the query .suppose we have a derivation step where .then , by lemma [ lemma - operational - sem ] , .hence , if is a query that is more general than , as , we have .so , by lemma [ lemma - operational - sem ] , there exists a query such that .the following lifting result says that , moreover , is more general than : [ theorem - lifting ] consider a derivation step and a query that is more general than .then , there exists a derivation step where is more general than . from this theorem , we derive two corollaries that can be used to infer looping queries just from the text of a clp( ) program : [ coro - p - if - p ] let be a rule. if is more general than then loops with respect to .[ coro - p - if - q ] let be a rule from a program .if loops with respect to then loops with respect to .[ ex - loop - inference - append ] consider the clp( ) rule : ,\mathit{ys } , [ x|\mathit{zs } ] ) \leftarrow \mathit{true } \diamond \mathit{append}(\mathit{xs},\mathit{ys},\mathit{zs})\ ] ] we note that the query is more general than the query ,\mathit{ys } , [ x|\mathit{zs}])}\,|\,{\mathit{true}}\rangle} ] .[ example - set - of - pos ] if we want to distinguish the first argument position of the predicate symbol defined in example [ example - neutral - rlin ] , we set .let be a set of positions .then , is the set of positions defined as : for each predicate symbol , \setminus\tau(p) ] , which is true .[ example - dnlog - term ] suppose that .consider the rule where is the constraint .then , the only local variable of is .any filter where and is dnlog for .indeed , , and is true if and only if has the form .so the formula of definition [ def - log - dn ] turns into \,\big ] , \end{array}\ ] ] which is true .the logical definition of derivation neutrality implies the operational one : [ prop - dnlog - implies - dn ] let be a rule and be a filter .if is dnlog for then is dn for . the reverse implication does not always hold .but when considering a special case of the ( ) condition of _ solution compactness _ given in , we get : [ prop - dn - implies - dnlog2 ] let be a rule and be a filter .assume enjoys the following property : for each , there exists a ground -term such that =\alpha ] .as and are variable disjoint , we have where is the query . as we assumed ( [ eq1-theo - dn - iff - dnlog ] ) , we have to establish that \big] ] .it can be noticed that is -more general than .as is dn for , there exists a query that is -more general than and such that .necessarily , where is a variant of variable disjoint with .as we assumed ( [ eq2-theo - dn - iff - dnlog ] ) , we now have to establish that $ ] holds .this is done using the fact that is -more general than and that . 
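to make the subsumption test of corollary [ coro - p - if - p ] concrete , here is a minimal sketch , in python , of the loop check for the special case of the finite - tree constraint domain with the constraint true , where `` more general than '' reduces to an instance check between atoms . the term encoding , the helper names and the reconstructed append rule of example [ ex - loop - inference - append ] are ours and only illustrate the idea ; they are not part of the formal development .

```python
# Terms: a variable is a plain string; a compound term is a tuple
# (functor, arg1, ..., argn).  The constraint part is fixed to `true`.

def is_var(t):
    return isinstance(t, str)

def match(general, specific, subst):
    """One-way unification: extend `subst` so that `general` instantiated
    by `subst` equals `specific`.  Return the substitution or None."""
    if is_var(general):
        if general in subst:
            return subst if subst[general] == specific else None
        extended = dict(subst)
        extended[general] = specific
        return extended
    if is_var(specific):
        return None                      # a compound term cannot match a variable
    if general[0] != specific[0] or len(general) != len(specific):
        return None
    for g, s in zip(general[1:], specific[1:]):
        subst = match(g, s, subst)
        if subst is None:
            return None
    return subst

def more_general(q1, q2):
    """<q1 | true> is more general than <q2 | true> iff q2 is an instance of q1."""
    return match(q1, q2, {}) is not None

def head_loops(head, body):
    """Loop check for a binary rule  head <- true <> body :
    if <body | true> is more general than <head | true>, then <head | true> loops."""
    return more_general(body, head)

# Reconstructed rule of the running example:
#   append([X|Xs], Ys, [X|Zs]) <- true <> append(Xs, Ys, Zs)
cons = lambda h, t: ('.', h, t)
head = ('append', cons('X', 'Xs'), 'Ys', cons('X', 'Zs'))
body = ('append', 'Xs', 'Ys', 'Zs')

print(head_loops(head, body))   # True: the head query loops w.r.t. this rule
```

running this sketch reports a loop for the append rule , in agreement with corollary [ coro - p - if - p ] ; it is of course limited to the constraint true and does not implement the filter - based generalization developed in this paper .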
in the constraint domain , dn is equivalent to dnlog .in , we gave , in the scope of logic programming , a syntactic definition of neutral arguments .now we extend this syntactic criterion to the more general framework of constraint logic programming .first , we need rules in flat form : a rule is said to be _ flat _ if has the form where is a sequence of terms and is a sequence of terms such that .notice that there are some rules for which there exists no `` equivalent '' rule in flat form .more precisely , there exists no rule verifying ( take for instance in . ) syntactic derivation neutrality is defined that way : [ def - syn - dn ] let be a filter and be a flat rule .we say that is _ dnsyn _ for if * is more general than , * is more general than , * , * . in example[ example - dnlog - term ] , the rule is flat .moreover , the filter is dnsyn for . a connection between dn , dnsyn and dnlog is as follows : [ prop - dnsyn - implies - dn ] [ prop - dnsyn - dn ]let be a flat rule and be a filter .if is dnsyn for then is dnlog for hence ( by proposition [ prop - dnlog - implies - dn ] ) is dn for .if is dnlog for then * ( dnsyn1 ) * holds .notice that a dnlog filter is not necessarily dnsyn because one of * ( dnsyn24 ) * may not hold : in , consider the flat rule : let where and .then , is dnlog for , but none of * ( dnsyn24 ) * hold .however , in the special case of logic programming , we have : [ theo - dn - dnsyn - log - prog ] suppose that .let be a flat rule and be filter . if is dnlog for then * ( dnsyn3 ) * and * ( dnsyn4 ) * hold .the results of can be easily obtained within the framework presented above .it suffices to consider the following special kind of filter : [ def - does - not - filter ] we say that is an _open filter _ if for all , has the form where is a sequence of distinct variables . in an open filter ,the function does not filter anything " : [ lemma - open - filters1 ] let be an open filter .then , a query is -more general than a query if and only if is more general than .consequently , an open filter is uniquely determined by its set of positions .when reconsidering the definitions and results of the preceding section within such a context , we exactly get what we presented in .in particular , definition [ def - log - dn ] can be rephrased as : [ def - dnlog - open - filter ] a set of positions is _ dnlog _ for a rule if where . as stated in sect [ section - intro ] , the framework presented in this paper is a strict generalization of that of .this is illustrated by the following example .[ ex - stric - generalization ] first , notice that , as is not more general than , corollary [ coro - p - if - p ] does not allow to infer that loops with respect to .let us try to use definition [ def - dnlog - open - filter ] to prove that the argument of is `` irrelevant '' .we let . hence , , and .let us consider a valuation such that , and .so , we have .but we do not have .for instance , if we consider such that and for each variable distinct from , we do not have as the subformula of can not hold , whatever value is assign to .consequently , we do not have , so is not dnlog for . as , by theorem [ prop - dn - implies - dnlog2 ] is not dn for . therefore , using open filters with proposition [ propo - p - if - p - delta ] we are not able to prove that loops with respect to . 
however , in example [ example - dnlog - term ] , we noticed that any filter where and is dnlog , hence dn , for . moreover , for such a filter , is -more general than . consequently , by proposition [ propo - p - if - p - delta ] , loops with respect to . we have presented a criterion to detect non - terminating atomic queries with respect to a binary clp clause . this criterion generalizes our previous papers to the clp setting and allows us to reconstruct the work we did in the lp framework . however , when switching from lp to clp , we lose the ability to compute , given a binary clause , a useful filter . we plan to work on this and try to define some conditions on the constraint domain which enable the computation of such filters . moreover , as pointed out by an anonymous referee , dnsyn and dnlog seem to be independent notions which we proved to coincide only for open filters with the specific constraint domain . in theorem [ prop - dn - implies - dnlog2 ] we investigate the relationship between dnlog and dn , while proposition [ prop - dnsyn - implies - dn ] and proposition [ theo - dn - dnsyn - log - prog ] essentially establish some connections between dnsyn and dnlog . the study of the relationship between dnsyn and dn is still missing and we intend to work on this shortly . d. de schreye , k. verschaetse , and m. bruynooghe . a practical technique for detecting non - terminating queries for a restricted class of horn clauses , using directed , weighted graphs . in _ proc . of iclp'90 _ , pages 649 - 663 . the mit press , 1990 . f. mesnard , e. payet , and u. neumerkel . detecting optimal termination conditions of logic programs . in m. hermenegildo and g. puebla , editors , _ proc . of the 9th international symposium on static analysis _ , volume 2477 of _ lecture notes in computer science _ , pages 509 - 525 . springer - verlag , berlin , 2002 . e. payet and f. mesnard . non - termination inference for constraint logic programs . in r. giacobazzi , editor , _ proc . of the 11th international symposium on static analysis _ , volume 3148 of _ lecture notes in computer science _ , pages 377 - 392 . springer - verlag , berlin , 2004 . c. speirs , z. somogyi , and h. søndergaard . termination analysis for mercury . in p. van hentenryck , editor , _ proc . of the 1997 international symposium on static analysis _ , volume 1302 of _ lecture notes in computer science _ . springer - verlag , 1997 .
on one hand , termination analysis of logic programs is now a fairly established research topic within the logic programming community . on the other hand , non - termination analysis seems to remain a much less attractive subject . this line of research can be divided into two kinds of approaches , dynamic versus static analysis , and this paper belongs to the latter . it proposes a criterion for detecting non - terminating atomic queries with respect to binary clp clauses , which strictly generalizes our previous work on this subject . we give a generic operational definition and a logical form of this criterion . then we show that the logical form is correct and complete with respect to the operational definition .
our understanding of neutrinos has changed dramatically in the past six years .thanks to many neutrino oscillation experiments involving solar , atmospheric , accelerator and reactor ( anti)-neutrinos , we have learned that neutrinos produced in a well defined flavor eigenstate can be detected , after propagating a macroscopic distance , as a different flavor eigenstate .the simplest interpretation of this phenomenon is that , like all charged fermions , the neutrinos have mass and that , similar to quarks , the neutrino weak , or flavor , eigenstates are different from neutrino mass eigenstates _i.e. _ , neutrinos mix . this new state of affairs has also raised many other issues which did not exist for massless neutrinos : for example , ( i ) massive dirac neutrinos , like charged leptons and quarks , can have nonzero magnetic dipole moments and massive dirac and majorana neutrinos can have nonzero transition dipole moments ; ( ii ) the heavier neutrinos decay into lighter ones , like charged leptons and quarks , and ( iii ) ( most importantly ) the neutrinos can be either majorana or dirac fermions ( see later for details ) .learning about all these possibilities can not only bring our knowledge of neutrinos to the same level as that of charged leptons and quarks , but may also lead to a plethora of laboratory as well as astrophysical and cosmological consequences with far - reaching implications .most importantly , knowing neutrino properties in detail may also play a crucial role in clarifying the blueprint of new physical laws beyond those embodied in the standard model .one may also consider the possibility that there could be new neutrino species beyond the three known ones .in addition to being a question whose answer would be a revolutionary milestone pointing to unexpected new physics , it may also become a necessity if the lsnd results are confirmed by the miniboone experiment , now in progress at fermilab .this would , undoubtedly , be a second revolution in our thinking about neutrinos and the nature of unification .the existence of neutrino masses qualifies as the first evidence of new physics beyond the standard model .the answers to the neutrino - questions mentioned above will add substantially to our knowledge about the precise nature of this new physics , and in turn about the nature of new forces beyond the standard model .they also have the potential to unravel some of the deepest and most long - standing mysteries of cosmology and astrophysics , such as the origin of matter , the origin of the heavy elements , and , perhaps , even the nature of dark energy .active endeavors are under way to launch the era of precision neutrino measurement science , that will surely broaden the horizon of our knowledge about neutrinos .we undertake this survey to pin down how different experimental results expected in the coming decades can elucidate the nature of neutrinos and our quest for new physics . 
in particular , we would like to know ( i ) the implications of neutrinos for such long - standing ideas as grand unification , supersymmetry , string theory , extra dimensions , etc ; ( ii ) the implications of the possible existence of additional neutrino species for physics and cosmology , and ( iii ) whether neutrinos have anything to do with the origin of the observed matter - antimatter asymmetry in the universe and , if so , whether there is any way to determine this via low - energy experiments .once the answers to these questions are at hand , we will have considerably narrowed the choices of new physics , providing a giant leap in our understanding of the physical universe .this review grew out of a year long study of the future of neutrino physics conducted by four divisions of the american physical society and is meant to be an overview of where we stand in neutrino physics today , where we are going in the next decades and the implications of this new knowledge for the nature of new physics and for the early universe .we apologize for surely missing vast parts of the neutrino literature in our references .we expect this overview to be supplemented by other excellent existing reviews of the subject in the literature .regarding more references and the more experimental aspects of the topics under study , we refer to the other working group reports , the solar and atmospheric experiments , the reactor , the neutrino factory and beta beam experiments and development , the neutrinoless double beta decay and direct searches for neutrino mass and the neutrino astrophysics and cosmology wgs .in particular , we have not discussed theoretical models for neutrino masses except giving a broad outline of ideas and getting beyond it only when there is a need to make some phenomenological point .nonetheless , we hope to have captured in this study the essential issues in neutrino physics that will be relevant as we proceed to the next level in our exploration of this fascinating field .the fact that the neutrino has no electric charge endows it with certain properties not shared by the charged fermions of the standard model .one can write two kinds of lorentz invariant mass terms for the neutrino , dirac and majorana masses , whereas for the charged fermions , conservation of electric charge allows only dirac - type mass terms . in the four component notation for describing fermions ,commonly used for writing the dirac equation for the electron , the dirac mass has the form , connecting fields of opposite chirality , whereas the majorana mass is of the form connecting fields of the same chirality , where is the four component spinor and is the charge conjugation matrix . 
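for concreteness , the two mass terms just described can be written , in one common convention ( the notation below is ours , and conventions for the placement of the charge conjugation matrix vary ) , as

$$ \mathcal{L}_{D} \;=\; -\,m_D\,\bar{\psi}_L \psi_R \;+\; \mathrm{h.c.}\,, \qquad\qquad \mathcal{L}_{M} \;=\; -\,\tfrac{1}{2}\, m_M\, \psi_L^{T}\, C^{-1}\, \psi_L \;+\; \mathrm{h.c.}\,, $$

where the dirac term couples the left - handed field to a right - handed partner , while the majorana term involves a single chirality .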
in the first case ,the fermion is different from its antiparticle , whereas in the latter case it is its own antiparticle .a majorana neutrino implies a whole new class of experimental signatures , the most prominent among them being the process of neutrinoless double beta decay of heavy nuclei , ( ) .since arises due to the presence of neutrino majorana masses , a measurement of its rate can provide very precise information about neutrino masses and mixing , provided ( i ) one can satisfactorily eliminate other contributions to this process that may arise from other interactions in a full beyond - the - standard - model theory , as we discuss below , ( ii ) one can precisely estimate the values of the nuclear matrix elements associated with the in question .the expressions for the dirac and majorana mass terms make it clear that a theory forbids majorana masses for a fermion only if there is an additional global symmetry under which it has nonzero charge .as noted above , for charged fermions such as the electron and the muon , majorana mass - terms are forbidden by the fact that they have nonzero electric charge and the theory has electromagnetic invariance .hence all charged fermions are dirac fermions . on the other hand ,a lagrangian with both majorana and dirac masses describes , necessarily , a pair of majorana fermions , irrespective of how small the majorana mass term is ( although it may prove very difficult to address whether the fermion is of the dirac or the majorana type when the majorana mass - term is significantly smaller than the dirac mass term ) .hence , since the neutrino has no electric charge , the simplest " theories predict that the neutrino is a majorana fermion meaning that a majorana neutrino is more natural ( or at least requires fewer assumptions ) than a dirac neutrino . in most of the discussions belowwe assume that the neutrino is a majorana fermion , unless otherwise noted .we will use a notation where the electroweak - doublet neutrino eigenstate ( defined as the neutrino that is produced in a charged - current weak interaction process associated with a well - defined charged lepton ) is denoted by , with .we will also consider to include a set of possible electroweak - singlet ( sterile " ) neutrinos .corresponding to these neutrino interaction eigenstates are mass eigenstates of neutrinos , .we will order the basis of mass eigenstates so that and , where .the neutrino interaction eigenstates are expressed in terms of the mass eigenstates as follows : , where is a dimensional unitary matrix .for the active neutrinos , with , the relevant submatrix is thus a rectangular matrix with three rows and columns . in seesaw models , the entries in the columns very small , of order , where is a typical dirac mass and is a large mass of a right - handed majorana neutrino .motivated by these models , one commonly assumes a decoupling , so that to good approximation the electroweak - doublet neutrinos can be expressed as linear combinations of just three mass eigenstates , and hence one deals with a truncation of the full neutrino mixing matrix . 
since only the three electroweak - doublet neutrinos couple to the ,the actual observed lepton mixing matrix that appears in the charged weak current involves the product of the rectangular submatrix of the full lepton mixing matrix with the adjoint of the unitary transformation mapping the mass to weak eigenstates of the charged leptons .thus , the lepton mixing matrix occurring in the charge - lowering weak current has three rows and columns , corresponding to the fact that , in general , a charged lepton couples to a which is a linear combination of mass eigenstates . henceforth ,unless explicitly indicated , we shall assume the above - mentioned decoupling , so that the neutrino mixing matrix is , and will use to refer to the observed lepton mixing matrix , incorporating both the mixings in the neutrino and charged lepton sector .neutrino oscillations and the mixing of two mass eigenstates of neutrinos , and , to form the weak eigenstates and were first discussed by pontecorvo and by maki , nakagawa , and sakata .the truncation of the full neutrino mixing matrix is often called the mns , mnsp , or pmns matrix in honor of these pioneers . for the case of three majorana neutrinos , the lepton mixing matrix can be written as , where will be parametrized as while .neutrino oscillation experiments have already provided measurements for the neutrino mass - squared differences , as well as the mixing angles . at the 3 level ,the allowed ranges are ; ; ; ; .there is currently no constraint on any of the cp odd phases or on the sign of .note that in contrast to the quark sector we have two large angles ( one possibly maximal ) and one small ( possibly zero ) angle . a very important fact about neutrinos that we seem to have learned from solar neutrino datais that neutrino propagation in matter is substantially different from that in vacuum .this effect is known as the msw ( mikheev - smirnov - wolfenstein ) effect and has been widely discussed in the literature .there is however an important aspect of the favored large mixing angle ( lma ) msw solution which needs to be tested in future experiments .the lma solution predicts a rise in the survival probability in the energy region of a few mev as we move down from higher to lower solar neutrino energies . since the present data do not cover this energy region , new solar neutrino data is needed in order to conclusively establish the lma solution .given the current precision of neutrino oscillation experiments and the fact that neutrino oscillations are only sensitive to mass - squared differences , three possible arrangements of the neutrino masses are allowed : 1 .normal hierarchy , i.e. . in this case , and ev . the solar neutrino oscillation involves the two lighter levels .the mass of the lightest neutrino is unconstrained .if , then we find the value of ev .inverted hierarchy , i.e. with ev . in this case , solar neutrino oscillation takes place between the heavier levels and we have .we have no information about except that its value is much less than the other two masses .3 . degenerate neutrinos i.e. , .the behaviors of masses for different mass patterns are shown in fig . 1 .+ oscillation experiments can not tell us about the overall scale of masses .it is therefore important to explore to what extent the absolute values of the masses can be determined . 
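as a rough numerical guide , the three patterns above can be summarized as follows ( the mass - squared differences used here , $\Delta m^2_{\odot}\simeq 8\times 10^{-5}\,\mathrm{eV}^2$ and $\Delta m^2_{\rm atm}\simeq 2.5\times 10^{-3}\,\mathrm{eV}^2$ , are representative values inserted for illustration rather than fit results ) :

$$ \mathrm{normal:}\quad m_2=\sqrt{m_1^2+\Delta m^2_{\odot}}\,,\qquad m_3\simeq\sqrt{m_1^2+\Delta m^2_{\rm atm}}\ \xrightarrow{\,m_1\to 0\,}\ \sqrt{\Delta m^2_{\rm atm}}\simeq 0.05\ \mathrm{eV}\,, $$

$$ \mathrm{inverted:}\quad m_1\simeq m_2\simeq\sqrt{m_3^2+\Delta m^2_{\rm atm}}\ \xrightarrow{\,m_3\to 0\,}\ 0.05\ \mathrm{eV}\,,\qquad \mathrm{degenerate:}\quad m_1\simeq m_2\simeq m_3\equiv m_0\gg\sqrt{\Delta m^2_{\rm atm}}\,. $$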
while discussing the question of absolute masses , it is good to keep in mind that none of the methods discussed below can provide any information about the lightest neutrino mass in the cases of a normal or inverted mass - hierarchy .they are most useful for determining absolute masses in the case of degenerate neutrinos _i.e. , _ when all ev ._ neutrino mass from beta decay _ one can directly search for the kinematic effect of nonzero neutrino masses in beta - decay by modifications of the kurie plot .this search is sensitive to neutrino masses regardless of whether the neutrinos are dirac or majorana particles .these may be due to the emission , via mixing , of massive neutrinos that cause kinks in this plot .if the masses are small , then the effects will occur near to the end point of the electron energy spectrum and will be sensitive to the quantity .the mainz and troitsk experiments place the present upper limit on ev and 2.2 ev , respectively .the proposed katrin experiment is projected to be sensitive to ev , which will have important implications for the theory of neutrino masses .for instance , if the result is positive , it will imply a degenerate spectrum ; on the other hand a negative result will be a very useful constraint .another sensitive probe for the absolute scale of the neutrino masses is the search for neutrinoless double beta decay , , whose rate is potentially measurable if the neutrinos are majorana fermions and is large enough , or if there are new lepton number violating interactions . in the absence of new lepton numberviolating interactions , a positive sign of would allow one to measure .either way , we would learn that the neutrinos are majorana fermions .however , if is very small , and there are new lepton number violating interactions , neutrinoless double beta decay will measure the strength of the new interactions ( such as doubly charged higgs fields or r - parity violating interactions ) rather than neutrino mass .there are many examples of models where new interactions can lead to a decay rate in the observable range without at the same time yielding a significant majorana mass for the neutrinos . as a result, one must be careful in interpreting any nonzero signal in experiments and not jump to the conclusion that a direct measurement of neutrino mass has been made .the way to tell whether such a nonzero signal is due to neutrino masses or is a reflection of new interactions is to supplement decay results with collider searches for these new interactions .thus collider experiments , such as those at lhc , and double beta experiments play complementary roles . the present best upper bounds on decay lifetimes come from the heidelberg - moscow and the igex experiments and can be translated into an upper limit on ev .there is a claim of discovery of neutrinoless double beta decay of enriched experiment by the heidelberg - moscow collaboration .interpreted in terms of a majorana mass of the neutrino , this implies between 0.12 ev to 0.90 ev .if confirmed , this result is of fundamental significance . for a thorough discussions of this result ( see also ) , we refer readers to the report of the double beta decay working group . a very different way to get information on the absolute scale of neutrino masses is from the study of the cosmic microwave radiation spectrum as well as the study of the large scale structure in the universe . 
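for reference , the three observables discussed in this subsection are commonly written as ( standard definitions , with $U_{ei}$ the elements of the first row of the lepton mixing matrix and $m_i$ the mass eigenvalues )

$$ m_\beta=\Big(\sum_i |U_{ei}|^2\, m_i^2\Big)^{1/2}\,,\qquad \langle m_{\beta\beta}\rangle=\Big|\sum_i U_{ei}^{2}\, m_i\Big|\,,\qquad \Sigma=\sum_i m_i\,, $$

probed by tritium beta decay , neutrinoless double beta decay and cosmology , respectively .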
a qualitative way of understanding why this is the case is that if neutrinos are present in abundance in the universe at the epoch of structure formation and have a sizable mass the formation of structure is affected .for instance , for a given neutrino mass , all structure on a scale smaller than a certain value given by the inverse of neutrino mass is washed away by neutrino free - streaming .this implies a reduced power on smaller scales .thus , accurate measurements of the galaxy power spectrum for small scales can help constrain or determine neutrino masses .recent results from the wmap and surveys of large scale structure have set a limit on the sum of neutrino masses ev .more recent results from the sloan digital sky survey ( sdss ) place the limit of ev .hannestad has emphasized that these upper limits can change if there are more neutrino species e.g. for 5 neutrinos , ev if they are in equilibrium at the epoch of bbn .a point worth emphasizing is that the above result is valid for both majorana and dirac neutrinos as long as the `` right - handed '' neutrinos decouple sufficiently earlier than the bbn epoch and are not regenerated subsequently .these limits already provide nontrivial information about neutrino masses : the limit ev , if taken at face value , implies that each individual neutrino mass is smaller than ev , which is similar to the projected sensitivity of the proposed katrin experiment .planck satellite observations are expected to be sensitive to even smaller values of , thereby providing a completely independent source of information on neutrino masses .these results may have implications for models of sterile neutrinos that attempt to explain the lsnd results .it is clear from eq .that , for majorana neutrinos , there are three cp - odd phases that characterize neutrino mixings , and our understanding of the leptonic sector will remain incomplete without knowledge of these .there are two possible ways to explore cp phases : ( i ) one way is to perform long - baseline oscillation experiments and look for differences between neutrino and anti - neutrino survival probabilities ; ( ii ) another way is to use possible connections with cosmology .it has often been argued that neutrinoless double beta decay may also provide an alternative way to explore cp violation .this is discussed in sec .[ sec:0nubbcp ] . in summary ,the most important goals of the next phase of neutrino oscillation experiments are : \(i ) to determine the value of as precisely as possible ; \(ii ) to determine the sign of , or the character of the neutrino mass hierarchy ; \(iii ) to improve the accuracy of the measurement of the other angles and the mass - squared differences ; \(iv ) to probe the existence of the three cp odd phases as best as possible .the discussion above assumes a minimal picture for massive neutrinos where the most general majorana mass for three neutrinos has been added .while this may be the picture to the leading order , it is quite conceivable that there are other interesting subdominant effects that go beyond this .it is of utmost interest to determine to what extent one can constrain ( or perhaps discover ) these new nonstandard phenomena , since their absence up to a certain level ( or , of course , their presence ) will provide crucial insight into the detailed nature of the new physics . 
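to illustrate how these complementary probes depend on the mass pattern , the following sketch evaluates $m_\beta$ , $\langle m_{\beta\beta}\rangle$ ( with the majorana phases set to zero ) and $\Sigma$ for both hierarchies as a function of the lightest mass ; the oscillation parameters are representative numbers chosen for illustration , not fit values .

```python
import math

# Representative oscillation parameters (illustrative values only)
DM2_SOL = 7.9e-5   # eV^2, solar mass-squared difference
DM2_ATM = 2.4e-3   # eV^2, atmospheric mass-squared difference
S12_SQ  = 0.31     # sin^2(theta_12)
S13_SQ  = 0.02     # sin^2(theta_13), below the current bound

def spectrum(m_lightest, hierarchy):
    """Return (m1, m2, m3) in eV for a given lightest neutrino mass."""
    if hierarchy == "normal":            # m1 < m2 < m3
        m1 = m_lightest
        m2 = math.sqrt(m1**2 + DM2_SOL)
        m3 = math.sqrt(m1**2 + DM2_ATM)
    else:                                # inverted: m3 < m1 ~ m2
        m3 = m_lightest
        m1 = math.sqrt(m3**2 + DM2_ATM)
        m2 = math.sqrt(m1**2 + DM2_SOL)
    return m1, m2, m3

def observables(m1, m2, m3):
    """Kinematic mass, effective Majorana mass (phases set to zero), and the sum."""
    ue1_sq = (1 - S12_SQ) * (1 - S13_SQ)
    ue2_sq = S12_SQ * (1 - S13_SQ)
    ue3_sq = S13_SQ                      # |U_e1|^2 + |U_e2|^2 + |U_e3|^2 = 1
    m_beta = math.sqrt(ue1_sq * m1**2 + ue2_sq * m2**2 + ue3_sq * m3**2)
    m_bb   = abs(ue1_sq * m1 + ue2_sq * m2 + ue3_sq * m3)
    return m_beta, m_bb, m1 + m2 + m3

for hierarchy in ("normal", "inverted"):
    for m_light in (0.0, 0.05, 0.2):
        mb, mbb, total = observables(*spectrum(m_light, hierarchy))
        print(f"{hierarchy:8s}  m_lightest={m_light:.2f} eV : "
              f"m_beta={mb:.3f}  m_bb={mbb:.3f}  sum={total:.3f} eV")
```

for a quasi - degenerate spectrum ( lightest mass around 0.2 ev ) the sketch gives $\Sigma\simeq 0.6$ ev , which is the regime probed by the cosmological bounds quoted above , while the hierarchical cases fall below the sensitivity of the beta decay experiments mentioned earlier .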
as an example of what we can learn from future experiments , we focus on three experiments : searches for neutrinoless double beta decay ( down to the level of ev ) , studies to determine the sign of , and the katrin experiment , which is sensitive to the effects of a nonzero neutrino mass down to ev in tritium beta decay . the interplay between the possible results of these three experiments is summarized in table [ tab : natureofneutrinos ] , which lists the different possible conclusions regarding the nature of the neutrinos and their mass hierarchy from the three complementary experiments . from the table it becomes clear that the mixing plus decoherence scenario in the antineutrino sector can easily account for all the available experimental information , including lsnd . it is important to stress once more that our sample point was not obtained through a scan over all the parameter space , but by an educated guess , and therefore plenty of room is left for improvements . on the other hand , for the mixing - only / no - decoherence scenario , we have taken the best fit values of the state of the art analysis and therefore no significant improvements are expected . at this point a word of warning is in order : although superficially it seems that scenario ( d ) , decoherence plus mixing in both sectors , provides an equally good fit , one should remember that including decoherence effects in the neutrino sector can have undesirable effects in solar neutrinos , especially due to the fact that decoherence effects are weighted by the distance traveled by the neutrino , something that may lead to sizeable ( not observed ! ) effects in the solar case . one might wonder then whether decohering effects , which affect the antineutrino sector sufficiently to account for the lsnd result , have any impact on the solar - neutrino related parameters , measured through antineutrinos in the kamland experiment .
in order to answer this question, it will be sufficient to calculate the electron survival probability for kamland in our model , which turns out to be , in perfect agreement with observations .it is also interesting to notice that in our model , the lsnd effect is not given by the phase inside the oscillation term ( which is proportional to the solar mass difference ) but rather by the decoherence factor multiplying the oscillation term .therefore the tension between lsnd and karmen data is naturally eliminated , because the difference in length leads to an exponential suppression .having said that , it is now clear that decoherence models ( once neutrino mixing is taken into account ) are the best ( and arguably the only ) way to explain all the observations including the lsnd result .this scenario , which makes dramatic predictions for the upcoming neutrino experiments , expresses a strong observable form of cpt violation in the laboratory , and in this sense , our fit gives a clear answer to the question asked in the introduction as to whether the weak form of cpt invariance is violated in nature .it seems that , in order to account for the lsnd results , we should invoke such a decoherence - induced cpt violation , which however is independent of any mass differences between particles and antiparticles .this cpt violating pattern , with equal mass spectra for neutrinos and antineutrinos , will have dramatic signatures in future neutrino oscillation experiments .the most striking consequence will be seen in miniboone , according to our picture , miniboone will be able to confirm lsnd only when running in the antineutrino mode and not in the neutrino one , as decoherence effects live only in the former .smaller but experimentally accessible signatures will be seen also in minos , by comparing conjugated channels ( most noticeably , the muon survival probability ) .higher energy neutrino beams or long - baseline experiments , will have significant deviations from the non - decoherence models , as our effects scale with energy and distance traveled , being therefore the best tool to explore decoherence models .if the neutrino masses are actually related to decoherence as a result of quantum gravity , this may have far reaching consequences for our understanding of the early stages of our universe , and even the issue of dark energy that came up recently as a result of astrophysical observations on a current acceleration of the universe from either distant supernovae data or measurements on cosmic microwave background temperature fluctuations from the wmap satellite .indeed , decoherence implies an absence of a well - defined scattering s - matrix , which in turn would imply cpt violation in the strong form . a positive cosmological _ constant _ will also lead to an ill definition of an s - matrix , precisely due to the existence , in such a case , of an asymptotic - future de sitter ( inflationary ) phase of the universe , with hubble parameter , implying the existence of a cosmic ( hubble ) horizon .this in turn will prevent a proper definition of pure asymptotic states .we would like to point out at this stage that the claimed value of the dark energy density component of the ( four - dimensional ) universe today , , with gev ( the planck mass scale ) , can actually be accounted for ( in an amusing coincidence ? 
) by the scale of the neutrino mass differences used in order to explain the oscillation experiments .indeed , ^4\sim 10^{-122 } m_p^4 ] and \mathrm{sm}=0.0301 ] , all sm observables will be affected through the fermi constant which is no longer equal to the muon decay constant : this shift in will destroy the excellent agreement between the sm and -pole observables . however , since always appears in the combination in neutral current amplitudes , the agreement can be recovered by absorbing the shift in into a shift in , or equivalently , in the oblique correction parameter .indeed , it was shown in ref . , that the -pole , nutev , and mass data can all be fit with the oblique correction parameters , , , and a flavor universal suppression parameter , the best fit values given by for a reference sm with .therefore , for this class of models to work , neutrino mixing with heavy gauge singlet states must be accompanied by new physics contributions to , , and .the values of and can be accommodated within the sm by simply increasing the higgs mass to hundreds of gev , but the mass requires a large and positive parameter which can not be generated within the sm .thus , the models are not complete until some mechanism is found which explains the mass .but then , if the sm is fit to the mass alone , the preferred higgs mass is far below direct search limits , which could be an indication that the mass requires new physics regardless of nutev . at first blush ,the preferred value of above is also problematic .this implies a large mixing angle , , if interpreted as due to mixing with a single heavy state .the commonly accepted seesaw mechanism relates the mixing angle to the ratio of the neutrino masses : choosing and ( is needed to suppress ) we find the mixing angle orders of magnitude too small : .however , this result does not mean that it is impossible to have a large enough mixing angle between the light and heavy states .as pointed out in ref . , in models with more than one generation , the generic mass matrix includes enough degrees of freedom to allow us to adjust all the masses and mixings independently .concrete examples of models with large mass hierarchies and large mixing angles can be found in refs . . what is sacrificed , however , is the traditional seesaw explanation of the small neutrino mass : i.e. since the majorana mass in the neutrino mass matrix should be of the order of the gut scale , the neutrino mass is naturally suppressed if the dirac mass is comparable to that of the other fermions .an alternative mechanism is used in ref . . there, an intergenerational symmetry is imposed on the neutrino mass texture which reduces its rank , generating naturally light ( massless ) mass eigenstates . abandoning the seesaw mechanismalso frees the masses of the heavy states from being fixed at the gut scale . indeed , in the model discussed in ref . , the assumption that neutrinos and up - type quarks have a common dirac mass implies that the masses of the heavy state could be a few tev , well within the reach of the lhc . without quark - lepton unification be even lighter , rendering them accessible to tevatron run ii .because of the large mixing angles between the light and heavy states in this class of models , flavor changing processes mediated by the heavy states may be greatly enhanced . 
as a result, stringent constraints can be placed on the models from the experimental limits on , , - conversion in nuclei , muonium - antimuonium oscillation , etc .for instance , the mega limit on leads to the constraint therefore , lepton universality among the must be broken maximally . shows that it is possible to fit the -pole , nutev , and lepton universality data while satisfying this condition .the meg ( mu - e - gamma ) experiment at psi plans to improve upon the mega limit by about two orders of magnitude .the meco ( muon on electron conversion ) experiment at brookhaven aims to improve the limits on conversion in nuclei by three orders of magnitude .further constraints can be obtained from muon , and the violation of ckm unitarity . the nutev anomaly ,even if it does not ultimately endure sustained scrutiny , stirs us to look past orthodoxies in our model - building ( seesaw , susy , guts , ... ) and to ask broadly what is permitted by the data .the neutrino mixing solution is relatively conservative in its use of the neutrino sector to address the nutev question .nonetheless , it makes interesting predictions about new particles at lhc , can be probed by a wide range of neutrino oscillation experiments , precision measurements and rare decay searches , and introduces an alternative to the seesaw paradigm . whether this or another solution resolves the nutev anomaly , the nutev result serves to focus the imagination of the theorist on the opportunities presented by the experiments .in this report , we have presented a brief review of the present knowledge of neutrino physics and what we can learn from the planned experiments in the next decade .three very important measurements that are guaranteed to have a significant impact on the search for physics beyond the standard model are : ( i ) the rate of , which will inform us not only whether the neutrino is a majorana or dirac particle but may also provide information about the neutrino masses ; ( ii ) the value of , which will considerably narrow the field of flavor models and ( iii ) the sign of the , which determines the neutrino mass hierarchy and will also help guide our understanding of flavor physics . within the three neutrino picture , more precise measurements of the solar and atmospheric mixing angles will be helpful in discriminating among various new physics possibilities .important though somewhat model - dependent constraints can be drawn from experimental searches for charged lepton flavor violating processes , such as or conversion in nuclei , and from searches for nonzero electric dipole moments of leptons .keep in mind that the matter - antimatter symmetry of the universe may have its explanation in the very same mechanism that generates the small neutrino masses , and that we may be able to test this hypothesis with enough low - energy information . 
beyond the three neutrino picture , a very important issue is the status of the lsnd result and whether the existence of light sterile neutrinos can be inferred from terrestrial neutrino oscillation experiments . the results of miniboone , assuming they confirm those from lsnd , have the potential to revolutionize our current understanding of neutrinos . even if miniboone does not confirm the lsnd result , sterile neutrino effects can still be present in different channels at a more subdominant level , as has been suggested in several theoretical models . another important issue in neutrino physics is the magnetic moment of the neutrino , which is expected to be nonzero but very small within the standard picture of ev sized neutrino masses and in the absence of new physics at the tev scale . thus , evidence for a nonzero neutrino magnetic moment close to the current astrophysical limit would have to be interpreted as evidence of tev scale new physics such as tev scale left - right models , horizontal models , or large extra dimensions . other unique probes of tev scale physics are provided by neutrino oscillation experiments , thanks to their sensitivity to non - standard neutrino interactions . the work of r.n.m . is supported by the national science foundation grant no . phy-0099544 and phy-0354401 . the work of w.r . and m.l . is supported by the `` deutsche forschungsgemeinschaft '' in the `` sonderforschungsbereich 375 für astroteilchenphysik '' and under project number ro-2516/3 - 1 ( w.r . ) . the work of m.r . is partially supported by the eu 6th framework program mrtn - ct-2004 - 503369 `` quest for unification '' and mrtn - ct-2004 - 005104 `` forcesuniverse '' . the work of r.s . is supported in part by the nsf grant nsf - phy-00 - 98527 . the work of j.k . is supported by the `` impuls- und vernetzungsfonds '' of the helmholtz association , contract number vh - ng-006 . the work of s.a . is supported by the pparc grant ppa / g / o/2002/00468 . the work of a.d.g . is sponsored in part by the us department of energy contract de - fg02 - 91er40684 . we thank f. vissani for participating in the shorter version of this report .
for a recent re - analysis see : i. masina and c.a .savoy , nucl .* b 661 * ( 2003 ) 365 , hep - ph/0211283 .r. barbieri , s. ferrara , c.a .savoy , phys .* b 119 * ( 1982 ) 343 ; a. chamsheddine , r. arnowitt , p. nath , phys .* 49 * ( 1982 ) 970 ; l. hall , j. lykken , s. weinberg , phys . rev .* d 27 * ( 1983 ) 2359 ; a. chamsheddine , r. arnowitt , p. nath , n=1 supergravity , world scientific , singapore ( 1984 ) ; n. ohta , prog .70 ( 1983 ) 542 .j. hisano , t. moroi , k. tobe and m. yamaguchi , phys .d * 53 * ( 1996 ) 2442 [ arxiv : hep - ph/9510309 ] ; s. f. king and m. oliveira , phys .d * 60 * ( 1999 ) 035003 [ arxiv : hep - ph/9804283 . s. t. petcov , s. profumo , y. takanishi and c. e. yaguna , nucl .b * 676 * , 453 ( 2004 ) [ hep - ph/0306195 ] .s. lavignac , i. masina and c.a .savoy , phys .* b 520 * ( 2001 ) 269 , hep - ph/0106245 ; i. masina , in proceedings of susy02 _ supersymmetry and unification of fundamental interactions _ , vol.1 331 , hamburg ( 2002 ) , hep - ph/0210125 .see for instance : j. sato , k. tobe and t. yanagida , phys .* b 498 * ( 2001 ) 189 , hep - ph/0010348 ; t. blazek and s.f .king , nucl .* b 662 * ( 2003 ) 359 , hep - ph/0211368 ; m. ciuchini , a. masiero , l. silvestrini , s.k .vempati , o. vives , phys .* 92 * ( 2004 ) 071801 , hep - ph/0307191 ; s.m .barr , phys .* b 578 * ( 2004 ) 394 , hep - ph/0307372 ; m .- c . chen and k. t. mahanthappa , hep - ph/0409096 ; hep - ph/0409165 .w. buchmller , d. delepine and f. vissani , phys . lett .* b 459 * ( 1999 ) 171 , hep - ph/9904219 ; j. sato and k. tobe , phys . rev . *d 63 * ( 2001 ) 116010 ; k.s .babu , ts .enkhbat , i. gogoladze , nucl . phys .* b 678 * ( 2004 ) 233 , hep - ph/0308093 ; i. masina and c.a .savoy , hep - ph/0501166 .t. blazek and s. f. king , phys .b * 518 * ( 2001 ) 109 [ arxiv : hep - ph/0105005 ] .smirnov , phys .d 48 * ( 1993 ) 3264 , hep - ph/9304205 ; g. altarelli , f. feruglio , i. masina , phys .* b 472 * ( 2000 ) 382 , hep - ph/9907532 .m. fukugita and t. yanagida , .g. t hooft , phys .lett . * 37 * , 8 ( 1976 ) .m. flanz , e. a. paschos and u. sarkar , phys .b * 345 * ( 1995 ) 248 [ erratum - ibid .b * 382 * ( 1996 ) 447 ] ; l. covi , e. roulet and f. vissani , phys . lett .b * 384 * ( 1996 ) 169 .g. f. giudice _et al . _ ,b * 685 * , 89 ( 2004 ) [ arxiv : hep - ph/0310123 ] .v. a. kuzmin , v. a. rubakov , and m. e. shaposhnikov , phys .b * 155 * , 36 ( 1985 ) ; j. a. harvey and m. s. turner , phys .d * 42 * , 3344 ( 1990 ) .j. r. ellis , j. hisano , s. lola and m. raidal , nucl .b * 621 * , 208 ( 2002 ) [ arxiv : hep - ph/0109125 ] .s. davidson and a. ibarra , jhep * 0109 * , 013 ( 2001 ) [ arxiv : hep - ph/0104076 ] .m. fujii , k. hamaguchi and t. yanagida , phys .d * 65 * , 115012 ( 2002 ) [ arxiv : hep - ph/0202210 ] .s. davidson and a. ibarra , phys .b * 535 * ( 2002 ) 25 [ hep - ph/0202239 ] .w. buchmller , p. di bari and m. plmacher , nucl .b * 665 * ( 2003 ) 445 [ hep - ph/0302092 ] . p. h. frampton , s. l. glashow and t. yanagida , phys .b * 548 * , 119 ( 2002 ) [ hep - ph/0208157 ] ; t. endoh , s. kaneko , s. k. kang , t. morozumi and m. tanimoto , phys .lett . * 89 * , 231601 ( 2002 ) [ hep - ph/0209020 ] .r. kuchimanchi and r. n. mohapatra , phys .d * 66 * , 051301 ( 2002 ) [ hep - ph/0207110 ] ; m. raidal and a. strumia , phys .b * 553 * , 72 ( 2003 ) [ hep - ph/0210021 ] ; b. dutta and r. n. mohapatra , phys .d * 68 * , 056006 ( 2003 ) [ hep - ph/0305059 ] ; a. ibarra and g. g. ross , hep - ph/0312138 ; s. f. 
king , phys .d * 67 * , 113010 ( 2003 ) [ hep - ph/0211228 ] . g. c. branco , r. gonzalez felipe , f. r. joaquim and m. n. rebelo , nucl .b * 640 * ( 2002 ) 202 [ hep - ph/0202030 ] ; h. b. nielsen and y. takanishi , nucl .b * 636 * , 305 ( 2002 ) [ arxiv : hep - ph/0204027 ] ; m. s. berger and k. siyeon , phys .d * 65 * ( 2002 ) 053019 [ hep - ph/0110001 ] ; d. falcone and f. tramontano , phys .d * 63 * ( 2001 ) 073007 [ hep - ph/0011053 ] .s. kaneko and m. tanimoto , phys .b * 551 * ( 2003 ) 127 [ hep - ph/0210155 ] ; s. kaneko , m. katsumata and m. tanimoto , jhep * 0307 * ( 2003 ) 025 [ hep - ph/0305014 ] ; l. velasco - sevilla , jhep * 10 * ( 2003 ) 035 , [ hep - ph/0307071 ] ; v. barger , d. a. dicus , h. j. he and t. li , phys .b583 ( 2004 ) 173 [ hep - ph/0310278 ] ; w. rodejohann , eur .j. c * 32 * , 235 ( 2004 ) ; b. dutta and r. n. mohapatra , hep - ph/0307163 ; r. n. mohapatra , s. nasri and h. b. yu , phys .b * 615 * , 231 ( 2005 ) [ arxiv : hep - ph/0502026 ] .m. y. khlopov and a. d. linde , phys .b * 138 * , 265 ( 1984 ) ; + j. r. ellis , j. e. kim and d. v. nanopoulos , phys .b * 145 * , 181 ( 1984 ) ; + m. kawasaki and t. moroi , prog .phys . * 93 * , 879 ( 1995 ) [ arxiv : hep - ph/9403364 ] ; + m. kawasaki , k. kohri and t. moroi , phys .d * 63 * , 103502 ( 2001 ) [ arxiv : hep - ph/0012279 ] .t. moroi , h. murayama and m. yamaguchi , phys .b * 303 * , 289 ( 1993 ) ; + a. de gouva , t. moroi and h. murayama , phys .d * 56 * , 1281 ( 1997 ) [ arxiv : hep - ph/9701244 ] .r. h. cyburt , j. r. ellis , b. d. fields and k. a. olive , phys .d * 67 * , 103521 ( 2003 ) [ arxiv : astro - ph/0211258 ] ; + m. kawasaki , k. kohri and t. moroi , arxiv : astro - ph/0408426 .h. p. nilles , m. peloso and l. sorbo , phys .lett . * 87 * , 051302 ( 2001 ) [ arxiv : hep - ph/0102264 ] ; + h. p. nilles , k. a. olive and m. peloso , phys .b * 522 * , 304 ( 2001 ) [ arxiv : hep - ph/0107212 ] .m. bolz , a. brandenburg and w. buchmller , nucl .b * 606 * , 518 ( 2001 ) [ arxiv : hep - ph/0012052 ] .g. lazarides and q. shafi , phys . lett .b * 258 * ( 1991 ) 305 ; + k. kumekawa , t. moroi and t. yanagida , prog .phys . * 92 * , 437 ( 1994 ) [ arxiv : hep - ph/9405337 ] ; + g. lazarides , r. k. schaefer and q. shafi , phys .d * 56 * , 1324 ( 1997 ) [ arxiv : hep - ph/9608256 ] ; + g. lazarides , springer tracts mod .phys . * 163 * , 227 ( 2000 ) [ arxiv : hep - ph/9904428 ] ; + g. f. giudice , m. peloso , a. riotto and i. tkachev , jhep * 9908 * , 014 ( 1999 ) [ arxiv : hep - ph/9905242 ] ; + t. asaka , k. hamaguchi , m. kawasaki and t. yanagida , phys . lett .b * 464 * , 12 ( 1999 ) [ arxiv : hep - ph/9906366 ] ; + t. asaka , k. hamaguchi , m. kawasaki and t. yanagida , phys .d * 61 * , 083512 ( 2000 ) [ arxiv : hep - ph/9907559 ] ; + m. kawasaki , m. yamaguchi and t. yanagida , phys .d * 63 * , 103514 ( 2001 ) [ arxiv : hep - ph/0011104 ] .h. murayama , h. suzuki , t. yanagida and j. yokoyama , phys .lett . * 70 * , 1912 ( 1993 ) ; + h. murayama and t. yanagida , phys .b * 322 * , 349 ( 1994 ) [ arxiv : hep - ph/9310297 ] ; + k. hamaguchi , h. murayama and t. yanagida , phys .d * 65 * ( 2002 ) 043512 [ hep - ph/0109030 ] ; + for a review see , e.g. , k. hamaguchi , arxiv : hep - ph/0212305 m. fujii , m. ibe and t. yanagida , phys . lett .b * 579 * , 6 ( 2004 ) [ arxiv : hep - ph/0310142 ] ; + j. l. feng , s. su and f. takayama , phys .d * 70 * , 075019 ( 2004 ) ; + l. roszkowski and r. ruiz de austri , arxiv : hep - ph/0408227 .w. buchmller , k. hamaguchi and m. 
ratz , phys .b * 574 * , 156 ( 2003 ) [ arxiv : hep - ph/0307181 ] ; + w. buchmller , k. hamaguchi , o. lebedev and m. ratz , nucl .b * 699 * , 292 ( 2004 ) [ arxiv : hep - th/0404168 ] ; + r. kallosh and a. linde , jhep * 0412 * , 004 ( 2004 ) [ arxiv : hep - th/0411011 ] ; + w. buchmller , k. hamaguchi , o. lebedev and m. ratz , jcap * 0501 * , 004 ( 2005 ) [ arxiv : hep - th/0411109 ] .m. bolz , w. buchmller and m. plmacher , phys .b * 443 * , 209 ( 1998 ) [ arxiv : hep - ph/9809381 ] ; + m. fujii and t. yanagida , phys .b * 549 * , 273 ( 2002 ) [ arxiv : hep - ph/0208191 ] ; + m. fujii , m. ibe and t. yanagida , phys .d * 69 * , 015006 ( 2004 ) [ arxiv : hep - ph/0309064 ] ; + j. r. ellis , d. v. nanopoulos and s. sarkar , nucl . phys .b * 259 * , 175 ( 1985 ) .w. buchmller , k. hamaguchi , m. ratz and t. yanagida , phys .b * 588 * , 90 ( 2004 ) [ arxiv : hep - ph/0402179 ] ; + g. weiglein _ et al ._ , arxiv : hep - ph/0410364 .a. s. joshipura and e. a. paschos , hep - ph/9906498 ; a. s. joshipura , e. a. paschos and w. rodejohann , nucl .b * 611 * , 227 ( 2001 ) .a. s. joshipura , e. a. paschos and w. rodejohann , jhep * 0108 * , 029 ( 2001 ) ; w. rodejohann , phys .b * 542 * , 100 ( 2002 ) .s. nasri , j. schechter and s. moussa , hep - ph/0402176 .w. rodejohann , phys .d * 70 * , 073010 ( 2004 ) [ hep - ph/0403236 ] ; p. h. gu and x. j. bi , hep - ph/0405092 ; n. sahu and s. uma sankar , arxiv : hep - ph/0406065 .w. l. guo , hep - ph/0406268 .t. hambye , e. ma and u. sarkar , nucl .b * 602 * , 23 ( 2001 ) .g. lazarides , phys .b * 452 * , 227 ( 1999 ) .r. n. mohapatra , a. perez - lorenzana and c. a. de sousa pires , phys .b * 474 * , 355 ( 2000 ) .k. dick , m. lindner , m. ratz and d. wright , phys .lett . * 84 * , 4039 ( 2000 ) [ hep - ph/9907562 ]. b. a. campbell , s. davidson , j. r. ellis and k. a. olive , phys .b * 297 * , 118 ( 1992 ) [ arxiv : hep - ph/9302221 ] .n. arkani - hamed , l. j. hall , h. murayama , d. r. smith and n. weiner , phys .d * 64 * , 115011 ( 2001 ) [ hep - ph/0006312 ] .f. borzumati and y. nomura , phys .d * 64 * , 053005 ( 2001 ) [ hep - ph/0007018 ] .h. murayama and a. pierce , phys .lett . * 89 * , 271601 ( 2002 ) [ hep - ph/0206177 ] .r. n. mohapatra and x. m. zhang , phys . rev .* d 46 * , 5331 ( 1992 ) ; l. boubekeur , t. hambye and g. senjanovi , phys . rev . lett .* 93 * ( 2004 ) 111601 [ arxiv : hep - ph/0404038 ] ; n. sahu and u. yajnik , hep - ph/0410075 ; s. f. king and t. yanagida , hep - ph/0411030 . s. f. king , phys .b * 439 * , 350 ( 1998 ) [ hep - ph/9806440 ] ; s. f. king , nucl .b * 562 * , 57 ( 1999 ) [ hep - ph/9904210 ] ; s. f. king , nucl .b * 576 * ( 2000 ) 85 [ hep - ph/9912492 ] ; s. f. king , jhep * 0209 * , 011 ( 2002 ) [ hep - ph/0204360 ] ; for a review see s. antusch and s. f. king , new j. phys .* 6 * ( 2004 ) 110 [ hep - ph/0405272 ] .e. k. akhmedov , m. frigerio and a. y. smirnov , jhep * 0309 * , 021 ( 2003 ) [ arxiv : hep - ph/0305322 ] . s. f. king and g. g. ross , phys .b * 520 * ( 2001 ) 243 [ arxiv : hep - ph/0108112 ] ; s. f. king and g. g. ross , phys .b * 574 * ( 2003 ) 239 [ arxiv : hep - ph/0307190 ] .h. murayama , h. suzuki , t. yanagida and j. yokoyama , phys .* 70 * ( 1993 ) 1912 ; j. r. ellis , m. raidal and t. yanagida , phys .b * 581 * , 9 ( 2004 ) [ hep - ph/0303242 ] .s. antusch , m. bastero - gil , s. f. king and q. shafi , hep - ph/0411298 .k. matsuda , y. koide and t. fukuyama , phys .d * 64 * , 053015 ( 2001 ) [ hep - ph/0010026 ] ; k. matsuda , hep - ph/0401154 ; k. matsuda , y. koide , t. 
fukuyama and h. nishiura , phys .d * 65 * , 033008 ( 2002 ) ; t. fukuyama and n. okada , jhep * 0211 * , 011 ( 2002 ) ; b. bajc , g. senjanovic and f. vissani , hep - ph/0402140 ; b. dutta , y. mimura and r. n. mohapatra , phys.rev .* d69 * , 115014 ( 2004 ) ; phys.lett . *b603 * , 35 ( 2004 ) ; phys.rev.lett.*94 * , 091804 ( 2005 ) ; hep - ph/0507319 ; t. fukuyama , a. ilakovac , t. kikuchi , s. meljanac and n. okada , hep - ph/0401213 , hep - ph/0405300 ; b. bajc , a. melfo , g. senjanovic and f. vissani , phys .d * 70 * , 035007 ( 2004 ) ; c. s. aulakh and a. girdhar , hep - ph/0204097 ; s. bertolini and m. malinsky , phys . rev .d * 72 * , 055021 ( 2005 ) [ arxiv : hep - ph/0504241 ] ; b. bajc , g. senjanovic and f. vissani , phys .lett . * 90 * , 051802 ( 2003 ) ; h. s. goh , r. n. mohapatra and s. p.ng , phys . lett .b * 570 * , 215 ( 2003 ) [ hep - ph/0303055 ] ; h. s. goh , r. n. mohapatra and s. p. ng , phys . rev . * d68* , 115008 ( 2003 ) [ hep - ph/0308197 ] . k. s. babu , j. c. pati and f. wilczek , hep - ph/9812538 , nucl . phys . *b566 * , 33 ( 2000 ) ; c. albright and s. m. barr , phys . rev . lett . * 85 * , 244 ( 2001 ) ; t. blazek , s. raby and k. tobe , phys . rev . *d62 * , 055001 ( 2000 ) ; z. berezhiani and a. rossi , nucl . phys . *b594 * , 113 ( 2001 ) ; r. kitano and y. mimura , phys . rev . *d63 * , 016008 ( 2001 ) ; for a recent review , see c. albright , hep - ph/0212090 .p. h. chankowski and z. pluciennik , phys .b * 316 * , 312 ( 1993 ) [ arxiv : hep - ph/9306333 ] .s. antusch , m. drees , j. kersten , m. lindner and m. ratz , phys .b * 519 * , 238 ( 2001 ) [ arxiv : hep - ph/0108005 ] .s. antusch , m. drees , j. kersten , m. lindner and m. ratz , phys .b * 525 * , 130 ( 2002 ) [ arxiv : hep - ph/0110366 ] .s. antusch and m. ratz , jhep * 07 * ( 2002 ) , 059 [ arxiv : hep - ph/0203027 ] .h. chankowski , w. krolikowski , and s. pokorski , phys . lett .* b473 * ( 2000 ) , 109 [ arxiv : hep - ph/9910231 ] .j. a. casas , j. r. espinosa , a. ibarra , and i. navarro , nucl .b573 * ( 2000 ) , 652 [ arxiv : hep - ph/9910420 ] . s. antusch , j. kersten , m. lindner , and m. ratz , nucl . phys .* b674 * ( 2003 ) , 401 [ arxiv : hep - ph/0305273 ] . j. w. mei and z. z. xing , phys .d * 69 * , 073003 ( 2004 ) [ arxiv : hep - ph/0312167 ] .s. luo , j. w. mei and z. z. xing , phys .d * 72 * , 053014 ( 2005 ) [ arxiv : hep - ph/0507065 ] .j. r. ellis and s. lola , phys .b * 458 * , 310 ( 1999 ) [ arxiv : hep - ph/9904279 ] .j. a. casas , j. r. espinosa , a. ibarra and i. navarro , nucl .b * 556 * , 3 ( 1999 ) [ arxiv : hep - ph/9904395 ] .j. a. casas , j. r. espinosa , a. ibarra and i. navarro , nucl .b * 569 * , 82 ( 2000 ) [ arxiv : hep - ph/9905381 ] .r. adhikari , e. ma and g. rajasekaran , phys .b * 486 * , 134 ( 2000 ) [ arxiv : hep - ph/0004197 ] .a. s. joshipura and s. d. rindani , phys .b * 494 * , 114 ( 2000 ) [ arxiv : hep - ph/0007334 ] .e. j. chun , phys .b * 505 * , 155 ( 2001 ) [ arxiv : hep - ph/0101170 ] .a. s. joshipura , s. d. rindani and n. n. singh , nucl .b * 660 * , 362 ( 2003 ) [ arxiv : hep - ph/0211378 ] .a. s. joshipura and s. d. rindani , phys .d * 67 * , 073009 ( 2003 ) [ arxiv : hep - ph/0211404 ] .n. n. singh and m. k. das , ( 2004 ) , hep - ph/0407206 .n. haba and n. okamura , eur .j. c * 14 * , 347 ( 2000 ) [ arxiv : hep - ph/9906481 ] .t. miura , e. takasugi and m. yoshimura , prog .phys . * 104 * , 1173 ( 2000 ) [ arxiv : hep - ph/0007066 ]. n. haba , y. matsui , and n. okamura , eur .j. 
* c17 * , 513 ( 2000 ) [ arxiv : hep - ph/0005075 ] ; n. haba , y. matsui , n. okamura , and m. sugiura , prog .phys . * 103 * , 145 ( 2000 ) [ arxiv : hep - ph/9908429 ] .p. h. chankowski and s. pokorski , int .j. mod .a * 17 * , 575 ( 2002 ) [ arxiv : hep - ph/0110249 ] .t. miura , t. shindou , and e. takasugi , phys . rev .* d66 * ( 2002 ) , 093002 [ arxiv : hep - ph/0206207 ] . c. w. chiang , phys .d * 63 * , 076009 ( 2001 ) [ arxiv : hep - ph/0011195 ] .m. lindner , m. ratz and m. a. schmidt , arxiv : hep - ph/0506280 .s. f. king and n. n. singh , nucl .b591 * ( 2000 ) , 325 [ arxiv : hep - ph/0006229 ] .s. antusch , j. kersten , m. lindner , and m. ratz , phys . lett .* b538 * ( 2002 ) , 87 [ arxiv : hep - ph/0203233 ] .s. antusch , j. kersten , m. lindner , and m. ratz , phys . lett .* b544 * ( 2002 ) , 1 [ arxiv : hep - ph/0206078 ] .w . mei and z .- z .xing , phys .rev . * d70 * ( 2004 ) , 053002 [ arxiv : hep - ph/0404081 ] . j. ellis , a. hektor , m. kadastik , k. kannike and m. raidal , arxiv : hep - ph/0506122 .s. antusch , j. kersten , m. lindner , m. ratz and m. a. schmidt , jhep * 0503 * , 024 ( 2005 ) [ arxiv : hep - ph/0501272 ] . j. w. mei , phys .d * 71 * , 073012 ( 2005 ) [ arxiv : hep - ph/0502015 ] .g. dutta , arxiv : hep - ph/0203222 .g. bhattacharyya , a. raychaudhuri and a. sil , phys .d * 67 * ( 2003 ) , 073004 [ arxiv : hep - ph/0211074 ] .t. miura , t. shindou and e. takasugi , phys .d * 68 * ( 2003 ) , 093009 [ arxiv : hep - ph/0308109 ] .t. shindou and e. takasugi , phys .d * 70 * ( 2004 ) , 013005 [ arxiv : hep - ph/0402106 ] .j. a. casas , j. r. espinosa and i. navarro , phys .* 89 * ( 2002 ) , 161801 [ arxiv : hep - ph/0206276 ] . j. a. casas , j. r. espinosa , and i. navarro , jhep * 09 * ( 2003 ) , 048 [ arxiv : hep - ph/0306243 ] . p. h. chankowski , a. ioannisian , s. pokorski , and j. w. f. valle , phys . rev . lett .* 86 * ( 2001 ) , 3488 [ arxiv : hep - ph/0011150 ] .chen and k. t. mahanthappa , int .j. mod .a * 16 * , 3923 ( 2001 ) [ arxiv : hep - ph/0102215 ] .k. s. babu , e. ma and j. w. f. valle , phys .b * 552 * ( 2003 ) , 207 [ arxiv : hep - ph/0206292 ] .n. haba , y. matsui , n. okamura and t. suzuki , phys .b * 489 * ( 2000 ) , 184 [ arxiv : hep - ph/0005064 ] .t. k. kuo , s. h. chiu and g. h. wu , eur .j. c * 21 * ( 2001 ) , 281 [ arxiv : hep - ph/0011058 ] .r. gonzalez felipe and f. r. joaquim , jhep * 0109 * ( 2001 ) , 015 [ arxiv : hep - ph/0106226 ] . m. k. parida , c. r. das and g. rajasekaran , pramana * 62 * , 647 ( 2004 ) [ arxiv : hep - ph/0203097 ] . k. r. s. balaji , a. s. dighe , r. n. mohapatra , and m. k. parida , phys . lett .* b481 * ( 2000 ) , 33 [ arxiv : hep - ph/0002177 ] . k. r. s. balaji , r. n. mohapatra , m. k. parida and e. a. paschos , phys .d * 63 * , 113002 ( 2001 ) [ arxiv : hep - ph/0011263 ] .k. s. babu and r. n. mohapatra , phys .b532 * ( 2002 ) , 77 [ arxiv : hep - ph/0201176 ] .s. antusch , p. huber , j. kersten , t. schwetz and w. winter , phys .d * 70 * ( 2004 ) 097302 , hep - ph/0404268 .w. grimus and l. lavoura , arxiv : hep - ph/0410279 .r. barbieri , p. creminelli , a. strumia and n. tetradis , nucl .b * 575 * , 61 ( 2000 ) [ hep - ph/9911315v3 ] .w. buchmller and m. plmacher , phys .b * 511 * , 74 ( 2001 ) [ hep - ph/0104189 ] .m. cirelli , g. marandella , a. strumia and f. vissani , hep - ph/0403158 .t.d . lee and c.n .yang , phys . rev . * 104 * , 254 ( 1956 ) , a. salam , nuovo cim . * 5 * , 299 ( 1957 ) , v. kobzarev , l. okun , and i. pomeranchuk , sov.j.nucl.phys . 
* 3 * , 837 ( 1966 ) .okun , sov.phys .jetp * 52 * , 351 ( 1980 ) , s.i .blinnikov and m. yu .khlopov , sov . astron .jour . * 60 * , 632 ( 1983 ) , b. holdom , phys .b166 * , 196 ( 1985 ) , s.l .glashow , phys .b167 * , 35 ( 1986 ) , e.d .carlson and s.l .glashow , phys .b193 * , 168 ( 1987 ) , m.yu .et al . _ ,sov . astron .jour . * 68 * , 42 ( 1991 ) , e. kolb , d. seckel and m. turner , nature , * 514 * , 415 ( 1985 ) , z.k .silagadze , mod .a * 14 * , 2321 ( 1999 ) and acta phys .b * 32 * , 99 ( 2001 ) .r. foot , h. lew and r.r .volkas , phys . lett . *b271 * , 67 ( 1991 ) and mod .a7 , 2567 ( 1992 ) , r. foot , mod .a9 , 169 ( 1994 ) , r. foot and r.r .volkas , phys . rev .d 52 , 6595 ( 1995 ) .z. g. berezhiani and r. n. mohapatra , phys .d * 52 * , 6607 ( 1995 ) , z.g .berezhiani , a.d .dolgov and r.n .mohapatra , phys . lett . *b375 * , 26 ( 1996 ) , z.g .berezhiani , acta phys .polonica , * b27 * , 1503 ( 1996 ) , r.n .mohapatra and v.l .teplitz , astrophys .j. * 478 * , 29 ( 1997 ) , z. berezhiani , d. comelli and f.l .villante , phys .* b503 * , 362 ( 2001 ) .k. benakli and a. y. smirnov , phys .lett . * 79 * , 4314 ( 1997 ) z. chacko and r. n. mohapatra , phys .d * 61 * , 053002 ( 2000 ) [ arxiv : hep - ph/9905388 ] ; m. frank , m. sher and i. turan , arxiv : hep - ph/0412090 ; hep - ph/0503084 .s. m. bilenky , c. giunti and w. grimus , eur .j. c * 1 * , 247 ( 1998 ) a. strumia , phys . lett .b * 539 * , 91 ( 2002 ) s. m. bilenky , s. pascoli and s. t. petcov , phys .d * 64 * , 113003 ( 2001 ) [ arxiv : hep - ph/0104218 ] ; r. n. mohapatra , s. nasri and h. b. yu , phys .d * 72 * , 033007 ( 2005 ) .v. s. berezinsky and a. vilenkin , phys .d * 62 * , 083512 ( 2000 ) m. maltoni , t. schwetz , m. a. tortola and j. w. f. valle , hep - ph/0305312 , hep - ph/0405172 .gonzalez - garcia , m. maltoni , c. pena - garay , phys .rev . * d64 * , 093001 ( 2001 ) ; m. maltoni , t. schwetz , m. trtola and j. w. f. valle , nucl . phys . * b643 * , 321 ( 2002 ) ; h. ps , l. song , t. weiler , phys . rev . * d67 * , 073019 ( 2003 ) .r. abela et al . , phys . lett . *b105 * , 263 ( 1981 ) ; f. calaprice et al ., phys . lett . *b106 * , 175 ( 1981 ) ; r. minehart et al .lett . * 52 * , 804 ( 1984 ) ; m. daum et al ., phys . rev . * d36 * , 2624 ( 1987 ) .d. bryman et al .lett . * 50 * , 1546 ( 1983 ) ; g. azuelos et al .lett . * 56 * , 2241 ( 1986 ) ; n. de leener - rosier et al ., phys . rev . * d43 * , 3611 ( 1991 ) ; d. britton et al . , phys* d46 * , r885 ( 1992 ) .d. bryman and t. numao , phys . rev .* d53 * , 558 ( 1996 ) ; j. formaggio et al .lett . * 84 * , 443 ( 2000 ) ; m. daum et al .lett . * 85 * , 1515 ( 2000 ) p. astier , phys . lett . *b527 * , 23 ( 2002 ) .some early accelerator experiments include f. bergsma et al . , phys .b128 * , 361 ( 1983 ) ; a. cooper - sarkar et al . , * b160 * , 267 ( 1985 ) ; j. dorenbos et al . , phys . lett . *b166 * , 473 ( 1986 ) ; g. bernardi et al ., phys . lett . *b166 * , 479 ( 1986 ) ; l. oberauer et al . ,b198 * , 113 ( 1987 ) ; g. bernardi et al ., phys . lett . *b203 * , 332 ( 1988 ) .a more recent search is a. vaitaitis et al .lett . * 83 * , 4943 ( 1999 ) .see for further references .a. kusenko , g. segre , phys . lett . *b396 * , 197 ( 1997 ) ; g. fuller , a. kusenko , i. mocioiu , s. pascoli , phys . rev . *d68 * , 103002 ( 2003 ) .y. grossman and s. rakshit , phys .d * 69 * , 093002 ( 2004 ) , hep - ph/0311310 .y. grossman and h. e. haber , phys .d * 59 * , 093008 ( 1999 ) [ hep - ph/9810536 ] .a. s. joshipura and m. 
nowakowski , phys .d * 51 * , 2421 ( 1995 ) [ hep - ph/9408224 ] s. davidson and m. losada , jhep * 0005 * , 021 ( 2000 ) [ hep - ph/0005080 ]. s. davidson and m. losada , phys .d * 65 * , 075025 ( 2002 ) [ hep - ph/0010325 ] .r. n. mohapatra , prog . in part . andnucl . phys .31 , 39 ( 1993 ) .n. v. krasnikov , phys .b * 388 * , 783 ( 1996 ) [ hep - ph/9511464 ] .n. arkani - hamed , h. c. cheng , j. l. feng and l. j. hall , phys .lett . * 77 * , 1937 ( 1996 ) [ hep - ph/9603431 ] .n. arkani - hamed , j. l. feng , l. j. hall and h. c. cheng , nucl .b * 505 * , 3 ( 1997 ) [ hep - ph/9704205 ] .l. e. ibanez , f. marchesano and r. rabadan , jhep * 0111 * , 002 ( 2001 ) .i. antoniadis , e. kiritsis , j. rizos and t. n. tomaras , nucl .b * 660 * , 81 ( 2003 ) .a. font , l. e. ibanez , f. quevedo and a. sierra , nucl .b * 331 * , 421 ( 1990 ) . c. coriano and a. e. faraggi , phys .b * 581 * , 99 ( 2004 ) .j. r. ellis , g. k. leontaris , s. lola and d. v. nanopoulos , phys .b * 425 * , 86 ( 1998 ) .j. r. ellis , g. k. leontaris , s. lola and d. v. nanopoulos , eur .j. c * 9 * , 389 ( 1999 ) .j. e. kim , phys .b * 591 * , 119 ( 2004 ) [ arxiv : hep - ph/0403196 ] . earlier work on dynamical symmetry breaking of gauge symmetries includes r. jackiw and k. johnson , phys .d * 8 * , 2386 ( 1973 ) ; j. cornwall and r. norton , _ ibid ._ d * 8 * , 3338 ( 1973 ) ; m. weinstein , phys . rev .d * 7 * , 1854 ( 1973 ) ; s. weinberg , _ ibid . _ d * 13 * , 974 ( 1976 ) .b. holdom , phys .b * 150 * ( 1985 ) 301 ; k yamawaki , m. bando , k. matumoto , phys .* 56 * ( 1986 ) 1335 ; t. appelquist , d. karabali , l.c.r .wijewardhana , phys .* 57 * ( 1986 ) 957 ; t. appelquist and l.c.r .wijewardhana , phys .d * 35 * ( 1987 ) 774 .giudice , r. rattazzi and j.d .wells , nucl .* b544 * ( 1999 ) 3 ; t. han , j.d . jykken and rzhang , phys .* d59 * ( 1999 ) 105006 ; j.l .hewett , hep - ph/9811356 ; e.a .mirabelli , m. perelstein and m.e .peskin , phys .( 1999 ) 2236 ; s. nussinov and r. shrock , phys .* d59 * ( 1999 ) 105002 ; t.g .rizzo , phys .* d59 * ( 1999 ) 115010 ; p. nath and m. yamaguchi , phys .* d60 * ( 1999 ) 116006 ; a. mck , a. pilaftsis and r. rckl , phys .d * 65 * ( 2002 ) 085037 ; hep - ph/0312186 .r. barbieri , p. creminelli and a. strumia , nucl .( 2000 ) 28 ; a. ioannisian and j.w.f .valle , phys .( 2001 ) 073002 ; d.o .caldwell , r.n .mohapatra and s.j .yellin , phys .* d64 * ( 2001 ) 073001 ; k.r .dienes and i. sarcevic , phys .* b500 * ( 2001 ) 133 ; a. de gouva , g.f .giudice , a. strumia and k. tobe , nucl .* b623 * ( 2002 ) 395 .k. hagiwara _ et al ._ [ particle data group collaboration ] , phys . rev .d * 66 * , 010001 ( 2002 ) .h. murayama , hep - ph/0307127 . for more recent ( and more stringent bounds ) see a. de gouva and c. pea - garay , hep - ph/0406301 .equivalent bounds in the atmospheric sector can be found in a. de gouva , nucl .suppl . * 143 * , 167 ( 2005 ) [ arxiv : hep - ph/0408246 ] ; h. minakata , h. nunokawa , w.j.c . teves and r. zukanovich funchal , phys .d * 71 * , 013005 ( 2005 ) [ arxiv : hep - ph/0407326 ] .g. barenboim , l. borissov , j. lykken and a. y. smirnov , jhep * 0210 * , 001 ( 2002 ) [ hep - ph/0108199 ] .a. strumia , phys .b * 539 * , 91 ( 2002 ) [ hep - ph/0201134 ] .g. barenboim , l. borissov and j. lykken , phys .b * 534 * , 106 ( 2002 ) [ hep - ph/0201080 ] .g. barenboim , l. borissov and j. lykken , hep - ph/0212116 .m. c. gonzalez - garcia , m. maltoni and t. schwetz , phys .d * 68 * , 053007 ( 2003 ) [ hep - ph/0306226 ] .v. barger , d. marfatia and k. 
whisnant , phys .b * 576 * , 303 ( 2003 ) [ hep - ph/0308299 ] .o. w. greenberg , phys .lett . * 89 * , 231602 ( 2002 ) [ hep - ph/0201258 ] .a. de gouva , phys .d * 66 * , 076005 ( 2002 ) [ hep - ph/0204077 ] .a. v. kostelecky and m. mewes , hep - ph/0308300 ; s. choubey and s. f. king , phys .b * 586 * , 353 ( 2004 ) [ hep - ph/0311326 ] .g. barenboim and n. e. mavromatos , hep - ph/0404014 .s. choubey and s. f. king , phys .b * 586 * ( 2004 ) 353 [ arxiv : hep - ph/0311326 ] .[ nutev collaboration ] g. p. zeller _ et al ._ , phys .lett . * 88 * , 091802 ( 2002 ) [ hep - ex/0110059 ] ; phys .d * 65 * , 111103 ( 2002 ) [ hep - ex/0203004 ] ; k. s. mcfarland _et al . _ , hep - ex/0205080 ; g. p. zeller _ et al . _ , hep - ex/0207052 .c. h. llewellyn smith , nucl .b * 228 * , 205 ( 1983 ) .m. s. chanowitz , phys .d * 66 * , 073002 ( 2002 ) [ hep - ph/0207123 ] .the lep collaborations , the lep electroweak working group , and the sld heavy flavor and electroweak groups , cern - ep/2003 - 091 , hep - ex/0312023 .s. davidson , s. forte , p. gambino , n. rius and a. strumia , jhep * 0202 * , 037 ( 2002 ) [ hep - ph/0112302 ] ; s. davidson , hep - ph/0209316 ; p. gambino , hep - ph/0211009 .b. a. dobrescu and r. k. ellis , hep - ph/0310154 .k. p. o. diener , s. dittmaier and w. hollik , hep - ph/0310364 .p. gambino , hep - ph/0311257 .w. loinaz and t. takeuchi , phys .d * 60 * , 115008 ( 1999 ) [ hep - ph/9903362 ] .m. gronau , c. n. leung and j. l. rosner , phys .d * 29 * , 2539 ( 1984 ) ; j. bernabeu , a. santamaria , j. vidal , a. mendez and j. w. valle , phys .b * 187 * , 303 ( 1987 ) ; k. s. babu , j. c. pati and x. zhang , phys .d * 46 * , 2190 ( 1992 ) ; w. j. marciano , phys .d * 60 * , 093006 ( 1999 ) [ hep - ph/9903451 ] ; a. de gouva , g. f. giudice , a. strumia and k. tobe , nucl .b * 623 * , 395 ( 2002 ) [ hep - ph/0107156 ] ; k. s. babu and j. c. pati , hep - ph/0203029 .l. n. chang , d. ng and j. n. ng , phys .d * 50 * , 4589 ( 1994 ) [ hep - ph/9402259 ] ; w. loinaz , n. okamura , t. takeuchi and l. c. r. wijewardhana , phys .d * 67 * , 073012 ( 2003 ) [ hep - ph/0210193 ] ; t. takeuchi , hep - ph/0209109 ; t. takeuchi , w. loinaz , n. okamura and l. c. r. wijewardhana , hep - ph/0304203. w. loinaz , n. okamura , s. rayyan , t. takeuchi and l. c. r. wijewardhana , phys .d * 68 * , 073001 ( 2003 ) [ hep - ph/0304004 ] .m. e. peskin and t. takeuchi , phys .lett . * 65 * , 964 ( 1990 ) ; phys .d * 46 * , 381 ( 1992 ) , j. l. hewett , t. takeuchi and s. thomas , hep - ph/9603391 . s. l. glashow , hep - ph/0301250 .b. w. lee , s. pakvasa , r. e. shrock and h. sugawara , phys .lett . * 38 * , 937 ( 1977 ) .s. ahmad _ et al ._ , phys .d * 38 * ( 1988 ) 2102 ; f. simkovic , v. e. lyubovitskij , t. gutsche , a. faessler and s. kovalenko , phys .b * 544 * , 121 ( 2002 ) [ hep - ph/0112277 ] ; r. kitano , m. koike and y. okada , phys .d * 66 * , 096002 ( 2002 ) [ hep - ph/0203110 ] ; r. kitano , m. koike , s. komine and y. okada , phys .b * 575 * , 300 ( 2003 ) [ hep - ph/0308021 ] .l. willmann _ et al ._ , phys .lett . * 82 * , 49 ( 1999 ) [ hep - ex/9807011 ] .a. halprin , phys .* 48 * ( 1982 ) 1313 ; t. e. clark and s. t. love , mod .a * 19 * , 297 ( 2004 ) [ hep - ph/0307264 ] .w. loinaz , n. okamura , s. rayyan , t. takeuchi and l. c. r. wijewardhana , hep - ph/0403306 . s. ritt [ muegamma collaboration ] , nucl .instrum .a * 494 * ( 2002 ) 520 .see also the meg collaboration website at ` http://meg.web.psi.ch/ ` .j. l. 
popp [ meco collaboration ] , nucl .instrum .a * 472 * , 354 ( 2000 ) [ hep - ex/0101017 ] ; m. hebert [ meco collaboration ] , nucl .a * 721 * , 461 ( 2003 ) .e. sichtermann [ g-2 collaboration ] , econf * c030626 * , sabt03 ( 2003 ) [ hep - ex/0309008 ] .e. ma and d. p. roy , phys .d * 65 * , 075021 ( 2002 ) [ hep - ph/0111385 ] ; k. s. babu and j. c. pati , phys .d * 68 * , 035004 ( 2003 ) [ hep - ph/0207289 ] ; t. fukuyama , t. kikuchi and n. okada , phys .d * 68 * , 033012 ( 2003 ) [ hep - ph/0304190 ] .p. langacker , aip conf .proc . * 698 * , 1 ( 2004 ) [ hep - ph/0308145 ] .h. abele _ et al ._ , phys .lett . * 88 * , 211801 ( 2002 ) [ hep - ex/0206058 ] ; eur .j. c * 33 * , 1 ( 2004 ) [ hep - ph/0312150 ] .b. tipton _ et al ._ , aip conf .proc . * 539 * , 286 ( 2000 ) .
during 2004, four divisions of the american physical society commissioned a study of neutrino physics to take stock of where the field stands at the moment and where it is going in the near and far future. several working groups looked at various aspects of this vast field. the summary was published as a main report entitled ``the neutrino matrix'', accompanied by shorter, roughly 50-page reports from each working group. theoretical research in this field has been quite extensive and touches many areas, so the short 50-page report provided only a brief summary and an overview of a few of the important points. the theory discussion group therefore felt that it may be of value to the community to publish the entire study as a white paper, and the result is the current article. after a brief overview of the present knowledge of neutrino masses and mixing and of some popular ways to probe the new physics implied by recent data, the white paper summarizes what can be learned about physics beyond the standard model from the various proposed neutrino experiments. it also comments on the impact of these experiments on our understanding of the origin of the matter-antimatter asymmetry of the universe, on the basic nature of neutrino interactions, and on the existence of possible additional neutrinos. extensive references to the original literature are provided.
recently , many examples have been found of systems whose innate topology is not homogenous and can rather be described in terms of a scale - free , random structure .examples range from the internet to cellular metabolism networks .the interest of the physics community in this field stems from the fact that the behavior of many systems on such networks or _ graphs _ changes drastically and often attains characteristics close - to but not quite like the mean - field limit . a scale - free graph consists of a set of nodes or vertices and bonds or edges connecting the vertices to a structure .the essential measure of the scales or lack thereof is the connectivity or degree distribution of the nodes : the probability of any node to have edges ( one may distinguish between directed and undirected graphs ; in the former case the in - going and out - going pdf s can differ ) .if this probability follows a power - law behavior , a structure arises that does not have any intrinsic scale .the internet is an example of such a , and several models have been designed that fit the same description .later on enhanced models have been devised to capture the characteristics of more elaborate phenomena , like the tendency of clustering .the models lead to evolving graphs which grow continuously in time by the addition of new nodes , with only a limited number of notable exceptions where the scale - free graph is generated by means of a monte - carlo algorithm .the degree distribution and average connectivity become stationary in the thermodynamic limit , save for the tail of the distribution which is subject to finite - size cutoff effects . a practically minded question in the same spiritis the growth mechanism of the internet .it is a common feature of growing networks that they spontaneously develop degree - degree correlations between adjacent nodes .this is a manifestation of the _ preferential attachment _ principle , where more connected nodes are to attract a larger proportion of new links as the network grows .one recent study hints that the correlation between neighboring node connectivities is the mechanism behind the logarithmic scaling of the _ network diameter _ or the average shortest distance between two randomly chosen vertices , with respect to the system size .the support for the argument is empirical evidence from simulation results of a broader class of scale - free graph ensembles , where a power law growth of the diameter has been indeed identified .the question of viability of logarithmic scaling in real - world networks is particularly essential , since it has an impact on efficiency and percolation issues ( communication over the internet , spreading phenomena , community structures ) . until recently, less attention has been paid to the _ probability distribution _ of shortest path lengths or sometimes referred to as chemical distances in scale - free graphs , possibly owing to the fact that it has been implicitly assumed that the average diameter is an adequate measure of distance properties in the networks .the particular form of the distribution function may have bearings on the performance of search algorithms in scale - free graphs . on the other hand ,the distribution of shortest paths has been analytically calculated for the small - world model , employing the underlying lattice structure and arriving at a gaussian - like distribution for large system sizes . 
likewise, a model for deterministic scale-free graphs has been proposed and analyzed lately, where a gaussian is again obtained in the asymptotic limit. in this paper we focus on a subset of scale-free graphs described by the barabási-albert model that in addition are _loopless rooted trees_ from a topological point of view, i.e. the case where one connects new nodes by only one link to the existing structure. by removing the redundancy of interconnecting loops it is possible to consider the distance properties on a mean-field level, and also to analyze ``load'' or ``betweenness'', the number of shortest paths passing through vertices. the essential fact here is that the hub of the tree, i.e. the node with the highest connectivity for simplicity, transmits connections between all the branches emanating from it. we show that a stochastic branching process rooted in the preferential attachment rule gives rise to the logarithmic scaling of the diameter and that the pdf of the minimal paths approaches a gaussian. since throughout this text we are interested in tree structures, it is useful to overview their basic features. in the context of random networks, one often resorts to ref. and the derivation therein, which suggests that the diameter of graphs grows logarithmically. although the calculations there are performed for random graphs containing loops, the result obtained closely resembles that for _balanced cayley trees_ with uniform coordination numbers (except for the coordination number of the central node, which is different). according to this, the number of nodes separated from node zero by nonrecurring steps goes as $(k-1)^{l}$ up to a prefactor, where $k$ is the coordination number for the cayley tree. it then follows simply from the sum of a geometric series that both the longest distance between nodes and the average distance should behave as $\ln n / \ln(k-1)$. it is obvious that trees have unique shortest paths between any two nodes in the sense that without traversing the same edge twice it is not possible to find an alternate minimal route (unlike in unweighted graphs with loops, where there is usually more than one minimal path). we can then define one of the nodes as the _root_ of the tree and unambiguously arrange all the other nodes into _layers_ depending on their minimal distance to the root. finding the shortest path between two chosen nodes is nothing but identifying the deepest common node along the paths leading from the root to the source and target vertices and then connecting the two nodes via this common fork. notice that the choice of the root here is slightly arbitrary; one would prefer to use balanced trees. we study scale-free barabási-albert trees, starting with a single vertex. in each timestep we then add a new vertex with _only one_ outgoing edge. the other end of the edge is connected to one of the nodes already present in the system with a connection probability proportional to the connectivity or degree of a particular node. all edges are thought of as bidirectional and having the same weight, namely 1. as a slight modification to the original model, the connection symmetry of the first two nodes is broken by introducing a ``virtual'' edge to the very first vertex, which only gives preference for this node over the second one when it comes to the subsequent addition of further nodes. this way, we can automatically identify the most connected node in the network and call it its root.
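the growth rule just described is simple enough to state as a short program. the following is a minimal sketch in python (the language, the function name, the seed and the system size are our own choices and not part of the original model): preferential attachment is implemented by keeping a list with one entry per unit of degree and sampling from it uniformly, and the extra entry for node 0 plays the role of the ``virtual'' edge that keeps the first vertex the most connected one.

```python
import random

def grow_ba_tree(n_nodes, seed=None):
    """Grow a Barabási-Albert tree (m = 1): each new node attaches by a single
    edge to an existing node chosen with probability proportional to its degree.
    A 'virtual' stub on node 0 breaks the symmetry of the first two nodes, so
    node 0 stays the most connected vertex and can serve as the root."""
    rng = random.Random(seed)
    parent = [-1] * n_nodes        # parent[v] = node that v attached to (root: -1)
    stubs = [0]                    # one entry per unit of degree; the lone 0 is the virtual edge
    for v in range(1, n_nodes):
        target = stubs[rng.randrange(len(stubs))]   # preferential attachment
        parent[v] = target
        stubs.append(target)       # both endpoints of the new edge gain one unit of degree
        stubs.append(v)
    return parent

# example: one realization with 10^4 nodes; child lists are handy for traversals
parent = grow_ba_tree(10_000, seed=1)
children = [[] for _ in parent]
for v, p in enumerate(parent):
    if p >= 0:
        children[p].append(v)
```

the `parent` array and the `children` lists built here are reused in the later sketches of this section.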
to have a balanced tree one needs every subtree of the root to have the same number of nodes in the configurational average. this is only attained when the root is the most connected node, since the ba model ensures that the order of the nodes in terms of connectivity does not change in the course of the addition of new nodes and is fully determined by the time of introduction of a node. to begin with, we will investigate the shortest path distribution in a mean-field model of a tree network, between the root of the tree and all the other nodes. this argument extends also to _general graphs_ in the case that the new nodes added (e.g. barabási-albert networks) do not cause a significant amount of shortcuts between already existing nodes. let us consider a uniform branching process for each of the layers in the tree so that every node on a certain layer has the same number of offspring to produce the next layer beneath; it shall amount to for layer for short. this way the original stochastic model is approximated by a deterministic graph. the number of nodes with a separation from the root is then with the condition that . the actual form of can be obtained by making use of the preferential attachment rule for ba networks. according to this, the probability that a newly introduced node will connect to any given set of nodes is proportional to the cumulative connectivity of the set in question. thus, the number of nodes on layer changes according to the following rate equation, due to the addition of a new node: since the right hand side describes the attachment probability to layer , where is the number of nodes in the system and is the normalization factor for the connectivity. writing , expanding the derivation, and dividing by give if we substitute by explicitly indicating the size dependence on and assume that is a slowly varying function, it is straightforward to expect a solution in the decomposed form of : and since the left hand side is a function of only, with a constant. finally, we get and this relation does not apply to the root ( ) for obvious reasons. eq. ([mf_branching]) also implies that the number of nodes with a given distance to the root keeps growing with until drops below and then starts to decrease, as the bottom of the tree is approached ( ). this is in strong contrast to the formal prediction of a constant branching for _any_ random graph, which would result in a monotonic exponential growth, as would be the case in usual cayley trees. using the recursion relation for the number of nodes on a given level, we can give an estimate for the shortest path distribution function with the source of the paths at the root of the tree. instead of eq. ([mf_branching]) we now take a more general, power-law decaying form of the branching ratio and approximate the sum with an integral in the following expression, which reads $n(l) \approx b(0)\, a^{\lambda (l-1)} \exp\!\left( -\lambda \int_{1}^{l-1} \ln x \, dx \right) = \frac{b(0)}{e^{\lambda}} \left( \frac{a e}{l-1} \right)^{\lambda (l-1)}$ ([mf_n_l]). the result above for $n(l)$ approaches a non-normalized gaussian in the large-size limit, as the network size goes to infinity, which can be seen from fig. [nl_gauss_match] where a value of $a$ corresponding to a very large network has been used. in order to draw further conclusions, we will determine the parameters of the gaussian that give a best fit to $n(l)$. for the sake of simplicity, let us now consider a gaussian of the form $g(l) = g_{\max}\exp\!\left[-(l-\bar{l})^{2}/(2\sigma^{2})\right]$ ([nl_approx]).
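as a check on the branching-process picture above, the layer sizes and the empirical branching ratio of a layer can be read off directly from a simulated tree. a minimal sketch, assuming the `parent` array from the growth sketch above; averaging over many realizations, as done for the figures, is omitted for brevity.

```python
from collections import Counter

def layer_profile(parent):
    """Number of nodes at each distance l from the root (node 0) and the
    empirical branching ratio n(l+1)/n(l)."""
    depth = [0] * len(parent)
    for v in range(1, len(parent)):
        depth[v] = depth[parent[v]] + 1   # parent[v] < v, so its depth is already known
    counts = Counter(depth)
    n_l = [counts[l] for l in range(max(counts) + 1)]
    b_l = [n_l[l + 1] / n_l[l] for l in range(len(n_l) - 1)]
    return n_l, b_l

n_l, b_l = layer_profile(parent)   # 'parent' from the growth sketch
print(n_l[:8])   # layer sizes: grow, peak, then shrink towards the leaves
print(b_l[:8])   # branching ratio: decays with the distance from the root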
we first match the extremal point of $n(l)$ to the mean of the gaussian, resulting in $\bar{l} = a + 1$. the maximum value is thus $g_{\max} = (b(0)/e^{\lambda})\,e^{a\lambda}$; the standard deviation can be obtained by the requirement that the derivative functions of $n(l)$ and the gaussian be the same in the vicinity of the maximum up to first order, giving $\sigma^{2} = a/\lambda$. using the parameters acquired this way, we can find a very good approximation to $n(l)$, which is almost identical to that of a least-square fit. [fig. [nl_gauss_match] caption: $n(l)$ together with the matched gaussian; a few points of the gaussian are represented by the dots. the difference is only noticeable at the tails of the functions. the inset shows how the quadratic error of the two functions (normalized for area) appears to be a decreasing power law with increasing network size.] furthermore, additional information can be gained if we look into the normalization condition for $n(l)$. trivially, the sum of $n(l)$ over all layers should return the total number of nodes in the system, $n$. again, we approximate the sum with an integral: $n = \sum_{l} n(l) \approx \int n(x)\,dx \approx \frac{b(0)}{e^{\lambda}}\, e^{a\lambda} \sqrt{\frac{2\pi a}{\lambda}}$ ([n_nl_expr]), where we assumed that is large enough so that we can neglect the correction of the error function to the gaussian integral. we should also be aware that has a finite cutoff because of the bounded depth of the tree; yet, the quickly vanishing makes it possible to take to infinity. finally, as so that the integrand is bounded everywhere. recall now that the degree of a node in ba networks grows with the power of , . apart from , the only term on the right hand side of eq. ([n_nl_expr]) that may contribute to the overall linear growth in is , which increases much faster than , so the latter can be taken as a constant. the consistency condition with the left hand side requires that should hold, and thus, disregarding the constant, we end up with a very similar but more general expression than that of eq. ([mf_branching]) for : this implies that if a scale-free tree is characterized by a branching process decaying as a power law as a function of the distance from a suitable root node with the highest connectivity, the relation ([exp_relation]) should necessarily be satisfied. not surprisingly, it is true in the case of ba trees, where and according to eq. ([mf_branching]), and . one should note that in the process of constructing the mapping we rely on the fact that the number of nodes in a layer depends only on the _average branching ratio_; the fluctuations in the degrees of the nodes are omitted. for this reason the degree distribution exponent is not present in the tree representation. the node-to-node distances in the mean-field model are calculated as follows. we traverse each node of the tree and enumerate the routes with certain lengths that start at or go through this node and have both of their ends in the subtree of the node.
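the gaussian matching described above is also easy to test numerically: one can compare the measured layer profile with a gaussian that has the same area, mean and variance. a small sketch, assuming the `n_l` list from the previous sketch; the moment matching used here is our shortcut and not literally the extremum/derivative matching of the text.

```python
import numpy as np

def gaussian_match(n_l):
    """Gaussian with the same area, mean and variance as the layer profile n(l)."""
    n_l = np.asarray(n_l, dtype=float)
    l = np.arange(len(n_l))
    total = n_l.sum()                                  # equals the number of nodes N
    mean = (l * n_l).sum() / total
    var = ((l - mean) ** 2 * n_l).sum() / total
    gauss = total / np.sqrt(2 * np.pi * var) * np.exp(-(l - mean) ** 2 / (2 * var))
    return mean, np.sqrt(var), gauss

mean_l, sigma_l, gauss = gaussian_match(n_l)   # 'n_l' from the layer-profile sketch
print(mean_l, sigma_l)                 # play the role of the mean-field parameters above
print(np.max(np.abs(n_l - gauss)))     # the mismatch should be largest in the tails
```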
practically speaking, we can think of this node as the root for its subtree and perform the same calculations as we would do for the global root of the tree. if we by denote the number of possible paths going out to the subtree of a node on level that have length and one end fixed at the node on level , let now be the number of all routes that go _through_ or end at a particular node on level and have a length of , the second term in the sum has a contribution only when there are branches left going out from a node, on average when . the number of paths with a specific length in the whole system is therefore where is defined as earlier in eq. ([mf_n_l]), the number of nodes on a given level. the barabási-albert model allows for more rigorous derivations of the relation for . the mathematics community often refers to the tree interpretation of the model as _recursive trees_, and thus exact results have been obtained for both the distance distribution and the diameter of the trees. bollobás and riordan give a general proof for the diameter scaling of scale-free ba graphs. the mapping to cayley trees also resembles the work of krapivsky and redner, who arrive at a closed recursive analytical form for , in a more general context than that of scale-free trees. it also resembles cayley models of internet traceroutes by caldarelli and co-workers. [fig. [bratio] caption: the exponent is very close to ; the inset shows the data plotted against the normalized minimal distance. the systems range from to nodes in size with logarithmic increments; the number of iterations goes from to 100, depending on the size.] numerical simulations of ba scale-free trees fully confirm the inferences drawn in the preceding section. most important of all, the average number of branches per node on a given level is shown in fig. [bratio]. the numerical parameters of the power-law fit conform with the mean-field values: the exponent of the decay is almost exactly , and the prefactor of the logarithm is also close to the predicted one. it is also worth noting that if we rescale the distance variable by the logarithm of the system size, we can attain a data collapse with very good accuracy. this means that for ba trees in practice can be approximated as . from the inset of fig. [bratio] it is also apparent that the cutoff is a little over the value of , by a factor of about . on the other hand, the drop of at is measured to be either an exponential or a power law with a very large exponent. the mean-field prediction for the maximum of the shortest path length, , can be obtained by equating in eq. ([mf_n_l]) and using the gaussian approximation of eq. ([nl_approx]). the solution up to first order in is that , which again is in reasonable agreement with the mean-field argument. the derived quantities and the node-to-node distance distribution are shown in fig. [l_pl] for two distinct cases: * the root-to-node and node-to-node shortest path distribution is measured in an ensemble of random ba trees using simulations.
instead of every possible pair, the node-to-node distances are measured only between a large but finite number of randomly selected vertex pairs, for practical reasons; * both distribution functions are estimated by utilizing the mean-field tree mapping, using the asymptotic form of eq. ([bl_estimate]) for with a cutoff at . [fig. [l_pl] caption: distributions obtained from eqs. ([subtree_leaves])([cumulative_paths]); trees of nodes are measured and averaged over 100 realizations. the dashed lines show least-square fits with a function of the form $(\cdots)^{\,l-1}$ to the measured data points. the constant for root-to-node distances is and for node-to-node distances , in very good correspondence with the analytical values.] it is to be seen that a very good correspondence is found between the root-node distribution functions, but the overall two-point pdfs are sensibly close as well. while it has been relatively easy to derive analytical results for the root-node distances in the mean-field trees, eq. ([cumulative_paths]) and the quantities it is constructed of turn out to be too complex to handle without numerics. the formulas ([subtree_leaves])([cumulative_paths]) above are used to calculate the approximate values of the node-to-node path length distribution in the mean-field trees using the expression of eq. ([bl_estimate]) instead of the analytical form of eq. ([mf_branching]), so as to better represent the random ba trees. it is reassuring that the generic form of the node-to-node distance pdf also follows a function, only with a different constant from that of the root-node distances [eq. ([mf_n_l])([nl_approx])]; see fig. [l_pl]. the diameter of the trees relative to the logarithm of the system size can be seen in fig. [diameter], or in other words twice the mean of root-to-node distances. this is somewhat expected, as the main contribution to the node-to-node paths arises from passing through the root for large graphs. it leads to a convolution-type distribution (from the two legs). it can easily be seen that the diameter cannot exceed twice the depth of the tree, which gives rise to a logarithmic growth in any case. (a short numerical sketch of these distance and diameter measurements is given below.) [fig. [diameter] caption: the prefactors of and are in very good agreement with their respective analytical values.] on a hierarchical structure the total number of minimum paths going through a node (the ``load'') can be divided into two contributions: first, those paths that connect nodes in separate sub-branches of the node to each other, and, second, those that connect the nodes belonging to the branches to the rest of the tree. call $d(l)$ the number of descendants of a node on level $l$; in other words, $d(l)$ is the size of the subtree of the node. then the load can be written simply as $[d(l)]^{2} + d(l) \times (n - d(l))$ ([load_on_tree]), where the last term counts the connections towards the hub. for the particular example we are concerned with, it is easy to see that the latter term dominates ( ), and moreover that a good approximation is given by just the latter term.
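the node-to-node measurements and the diameter discussed above can be reproduced with a few breadth-first searches; on a tree, two successive searches already give the exact diameter. a minimal sketch, assuming the `parent` array from the growth sketch (the number of sampled pairs and the seed are arbitrary).

```python
import random
from collections import Counter, deque

def bfs_dist(adj, src):
    """Distances from src to every node by breadth-first search."""
    dist = [-1] * len(adj)
    dist[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if dist[w] < 0:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def tree_diameter(adj):
    """Exact tree diameter: BFS to the farthest node u, then BFS from u."""
    d0 = bfs_dist(adj, 0)
    u = max(range(len(adj)), key=d0.__getitem__)
    return max(bfs_dist(adj, u))

def pair_distances(adj, n_pairs, seed=None):
    """Shortest-path lengths between randomly chosen node pairs, the same
    shortcut used for the node-to-node histogram of fig. [l_pl]."""
    rng = random.Random(seed)
    return [bfs_dist(adj, rng.randrange(len(adj)))[rng.randrange(len(adj))]
            for _ in range(n_pairs)]

# adjacency list built from the 'parent' array of the growth sketch
adj = [[] for _ in parent]
for v, p in enumerate(parent):
    if p >= 0:
        adj[p].append(v)
        adj[v].append(p)

depths = bfs_dist(adj, 0)
print(tree_diameter(adj), 2 * sum(depths) / len(depths))   # diameter vs. twice the mean root depth
print(sorted(Counter(pair_distances(adj, 2000, seed=2)).items())[:5])
```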
thus one may investigate the dependence of the load on the level ( or depth ) of the tree , .for in the mean - field picture one has that and for the layer immediately below where we also used the recursion relation for .finally , the load changes for the layer underneath as since the load is the same for each of the nodes on a particular level , the distance load distribution is directly given by the normalized function , thus hiding the implicit dependence on .considering that and therefore we then expect to see that the load is inversely proportional to the number of nodes on the levels , which is indeed the case according to fig. [ load ] .the same result holds for normal cayley trees from eq .( [ load_on_tree ] ) . is proportional to , the number of nodes on the levels of the tree .the load on the root has not been showed since it does not average .the inset shows the load distribution for a usual cayley tree with a coordination number .the bold lines indicate power law fits , which give exponents of and for the ba and the cayley trees , respectively .the mean - field tree is a mapping of a random ba tree with nodes . ]is proportional to , the number of nodes on the levels of the tree .the load on the root has not been showed since it does not average .the inset shows the load distribution for a usual cayley tree with a coordination number .the bold lines indicate power law fits , which give exponents of and for the ba and the cayley trees , respectively .the mean - field tree is a mapping of a random ba tree with nodes . ]note that we have to use mean - field trees which would correspond to random networks with a large number of nodes so that the number of levels is of the order of 10 . for the load distributionwe consider only levels for which because otherwise subtrees do not exist in the average sense .it is surprising that the load distribution exponent does not depend on the actual form of , being universally [ eq .( [ load_n_relations ] ) ] . indeed , _the exponent of the distance load pdf is independent of the choice of the node that all the distances are taken relative to_. another common way of defining the importance of the nodes in terms of shortest paths passing through them is the one called betweenness , favorable for its algorithmic feasibility and simplicity .newman presents a breadth first search algorithm for efficient calculation of the betweenness of nodes on random graphs . the only notable difference to eq .( [ load_on_tree ] ) comes from the fact that the betweenness also accounts for paths that originate from the nodes themselves , which nevertheless amounts only to a constant system size .are taken with 100 realizations .the root which descendants are defined down from is always the initial node .the prediction of eq .( [ betweenness_approx ] ) is represented by the solid line . 
]we will calculate the betweenness on the trees , now focusing as a goal on the probability distribution of the load .an estimation can be given for a node by considering the contributions to it , and by separating the network to a descendants part with nodes in the branches and all the rest with nodes .the node being the source , we have shortest paths to any other node ; if the source is among the descendants , we have ones going through ; if the source is any other node from the network , we have .a fourth contribution , coming from paths passing through the node but having both ends in the descendant tree , has been neglected .they add up to an estimated betweenness of here it is to be seen that for small s the linear term dominates , just as in our previous load calculation ; fig .[ betweenness_desc ] justifies our estimations . the betweenness probability distribution taken over all nodes in the networkcan then be concluded to asymptotically follow a power law decay with a universal exponent of .this is since is linear in the number of descendants and moreover that the pdf of scales universally with an exponent of for supercritical trees . strictly speaking , the conclusions here are only true for the supercritical part of the tree , i.e. where . the subcritical leaves of the tree have an increasingly smaller number of descendants , though , which drop exponentially with each new layer , and it can be verified that the descendant pdf decay exponent is indeed above 2 if only this part of the tree is considered .nevertheless , fig .[ betweenness_distrib ] shows that both the descendant pdf and the load pdf are accurately described by inverse square functions .a scaling of the load distribution has been experimentally found on other scale - free networks as well , only with a slightly different universal exponent of about .the exponent of the power law shown is .the inset displays the logarithmically binned pdf of the descendants for systems with nodes .its power law exponent is as well . ]the exponent of the power law shown is .the inset displays the logarithmically binned pdf of the descendants for systems with nodes .its power law exponent is as well . ]a further , practically more far - reaching observation is that the average betweenness as measured as a function of the locally known node degree grows as a power - law of the degree with an exponent of about ( fig .[ betweenness_func ] ) . a mean - field approach can be used to estimate for the exponent , though , if we consider that the preferential attachment principle for large degrees gives rise to a descendant degree scaling of ( ) , which is the inverted relation for the time evolution of the degree of a parent node . in this particular case ,time is measured as the size of the node s subtree .a substitution of the latter into the linear load equation would suggest an exponent of ; the deviation from it may come from the rather restricted range of the degree that the relatively small system sizes allow .the fit of a power law indicates an exponent of about .the inset shows the average number of descendants versus the degree for systems with nodes , for which a power - law fit gives . ]the fit of a power law indicates an exponent of about .the inset shows the average number of descendants versus the degree for systems with nodes , for which a power - law fit gives . 
]in this paper we have mapped scale - free barabsi - albert trees to a deterministic model of a rooted tree with a uniform branching process on each layer of the tree . this idea resembles work on the internet structure and on the structure of branched cracks , where an inverse relation of the branching to distance has been observed .simulations show that the distribution of the number of branches on one particular layer of the tree follows a power - law function , but it turns out to be a good approximation to describe the branching only by its mean , . in the simple case of ba trees it can be shown by means of this mapping that the diameter of the networks is bounded by the logarithm of the network size and the asymptotic form of the distance distribution functions follows immediately .in other words , we can examine the slow convergence of this function to the limiting gaussian form for infinite system sizes .given an effective description in terms of a tree plus a branching process , further information can be found , e.g. one may consider the scaling of the number of shortest - distance paths ( load , or betweenness ) .non - uniform critical trees could perhaps be constructed in a self - organized fashion , as is possible for the statistically uniform case .one should note the close relation of the cayley representation to minimal spanning trees ( mst ) on scale - free ( random ) networks ; for the -networks these two coincide .this makes it an interesting prospect to study the load and distance properties of mst s in other scale - free networks .
the average node - to - node distance of scale - free graphs depends logarithmically on n , the number of nodes , while the probability distribution function ( pdf ) of the distances may take various forms . here we analyze these by considering mean - field arguments and by mapping the case of the barabsi - albert model into a tree with a depth - dependent branching ratio . this shows the origins of the average distance scaling and allows a demonstration of why the distribution approaches a gaussian in the limit of large n . the _ load _ , the number of shortest distance paths passing through any node , is discussed in the tree presentation .
theoretical studies of the transition mechanism and rate of thermally activated events involving rearrangements of atoms often involve finding the minimum energy path ( mep ) for the system to advance from the initial state minimum to the final state minimum on the energy surface .the estimation of the transition rate within harmonic transition state theory requires finding the highest energy along the mep , which represents a first order saddle point on the energy surface . while it is possible to use various methods to converge directly onto a saddle point starting from some initial guess , methods that produce the whole minimum energy path are useful because one needs to ensure that the highest saddle point for the full transition has been found , not just some saddle point .furthermore , calculations of meps often reveal unexpected intermediate minima . the nudged elastic band ( neb ) methodis frequently used to find meps . there , the path is represented by a discrete set of configurations of all the atoms in the system .some examples of complex meps calculated using neb can , for example , be found in refs. .the method is iterative and requires as input some initial path which is then refined to bring the configurations to the mep .harmonic spring forces are included between adjacent images to distribute them along the path . if all the spring constants are chosen to be the same , the images will be equally distributed .other choices can be made , for example higher density of images in the higher energy regions .a force projection , the nudging , is used to remove the component of the true force parallel to the path and the perpendicular component of the spring force .the number of iterations required can strongly depend on how close the initial path is to the mep .an important aspect of the method is that no knowledge of the transition mechanism is needed , only the endpoints corresponding to the initial and final states .if more than one mep exists between the given endpoints , the neb will most likely converge on the one closest to the initial guess .the initial path can , therefore , in some cases affect the results obtained with the neb method .the initial path in an neb calculation has so far typically been constructed by a linear interpolation ( li ) of the cartesian coordinates between the initial and final state minima .the method is often used for systems subject to periodic boundary conditions were internal coordinates are not practical .such paths can , however , involve images where atoms come much too close together , leading to large atomic forces and even divergence in electronic structure calculations .one simple way of avoiding such problems is to check interatomic distances in each image and move atoms apart if the distance between them is shorter than a cutoff value .other methods for finding meps have also been devised where paths are constructed in a sequential manner , adding one image after another with some type of extrapolation and relaxation each time a new image is added . 
here , we present a method for generating paths which are better suited as initial paths in neb calculations than li paths .it is inspired by one of the earliest approaches for calculating reaction paths , the one presented by halgren and lipscomb .their method involved two steps .first , a linear synchronous transit ( lst ) pathway was constructed so as to make pair distances change gradually along the path ( see below ) and then an optimization procedure , the quadratic synchronous transit , was carried out to further refine the path . in the method presented here ,the basic idea of lst is used to generate an improved initial guess for the neb method , but the procedure is different from the one used by halgren and lipscomb , as explained below .the article is organized as follows : in the following section , the li and lst methods are reviewed and the new method presented . in section [ sec : applications ] , three applications are presented , ( 1 ) rotation of a methyl group in an ethane molecule , ( 2 ) an exchange of atoms in an island on a crystal surface , and ( 3 ) an exchange of two si - atoms in amorphous silicon . the article concludes with a summary in section [ sec : conclusions ] .the neb method involves finding a discrete representation of the mep .first , the atomic coordinates at the two endpoints , i.e. the coordinates of the atoms at the energy minima representing initial and final states of the transition , and , are used to generate an initial path . here, will denote the vector of coordinates of the atoms in a given configuration , .typically , a linear interpolation of the cartesian coordinates of the two endpoint configurations is used as a starting guess .given that intermediate discretization points , here referred to as ` images ' of the system , will be used , the li path which so far has been most commonly used as initial path in neb calculations , is given by here , denotes the coordinates of atom and the index denotes the image number in the path and runs from to . in an neb calculation , a minimization procedureis then carried out to adjust the coordinates of the intermediate images until they lie on the mep , while the endpoint images are kept fixed . as mentioned above, there can be problems starting a calculation from an li path , especially when electronic structure calculations are used to evaluate the energy and atomic forces , since atoms can land too close to each other , leading to large atomic forces or even convergence problems in the electronic self - consistency iterations . if two atoms are too close and need to be moved apart in an image , , or if one chooses to make use of some knowledge of a reasonable intermediate configuration , , then the initial path for the neb can be constructed by first creating a linear interpolation from to and then from to .but , it is better to have an automatic way , as presented in the following section , of creating a path where pair distances are automatically physically reasonable , and where the initial path is more likely to lie closer to the mep than a linear interpolation , thereby reducing the number of iterations needed to reach convergence .following the first step in the two step procedure presented by halgren and lipscomb , an interpolation of all pair distances between atoms is carried out for each of the intermediate images along the path .these pair distances provide target values which the initial path is then made to match as closely as possible . 
the interpolated distance between atoms and in image is where with and , is the distance between atoms and in a given configuration of the atoms . the li path and the interpolation of pair distances are illustrated in fig. 1 .since there are many more atom pair distances than atomic degrees of freedom , vs. , the interpolated values of the atom coordinates can not satisfy the constraints rigorously and a compromise needs to be made . an objective function can be defined for each image by summing the squared deviation of pair distances from the target values here , is a weight function which can be used to place more emphasis on short distances , since the energy of an atomic system rises strongly when two atoms come too close together .the function defines an objective function for each image which has the form of a pairwise interaction potential that directs the atom coordinates to a configuration where the distances between atoms are close to the interpolated distances .one can think of , as defining an effective ` energy surface ' and use the neb method to find the optimal path on the surface .the force acting on atom in image is then obtained as after applying the neb iterative minimization with all spring constants chosen to be equal , a path with even distribution of the images is obtained where atom pair distances are changing gradually from one image to another .we will refer to this as the idpp path . as we illustrate below , with three example calculations , the idpp path is closer to the mep on the true energy surface than the li path . by using the idpp path as an initial path for neb calculations using atomic forces obtained by density functional theory ( dft ) ,the number of iterations needed to reach convergence was , in the cases studied here , significantly smaller than when the li path was used as the initial path .one good aspect of the idpp method is that it does not require a special coordinate system , such as internal coordinates ( e.g. , bond distances and angles ) and , therefore , has the advantage of being easily applicable to any system , including systems subject to periodic boundary conditions .we now compare this procedure to what halgren and lipscomb named the lst path . there ,an objective function was defined as with the parameter chosen to be in atomic units . the weight function in chosen to be .this choice for the function places greater weight on short distances , which are more important for the energetics .we have chosen the same weight function here in our calculations of the idpp paths . the second term on the right hand side of eqn.([equ : lst ] ) was added to remove uniform translation and help make the path continuous .( the are given by eqn.([equ : linearinter ] ) ) .it is , however , quite arbitrary and does not necessarily result in a continuous path with an even distribution of the images , as illustrated below .the atom coordinates , , in each image , , were chosen so as to minimize , in a least squares procedure .for illustration purposes , the method was applied to transitions in three different systems : rotation of a methyl group in ethane , interchange of atoms in a heptamer island on a surface and interchange of atoms in amorphous silicon . 
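Before turning to the applications, the following sketch shows how the two ingredients discussed above can be generated: the plain linear-interpolation (LI) path between the endpoint geometries, and the IDPP target pair distances together with the objective function evaluated for one image. The image indexing and the weight w(d) = 1/d**4 are assumptions of the sketch; the latter is a common choice emphasising short distances and stands in for the weight function whose explicit form is not reproduced here. The force on the atoms is simply the negative gradient of this objective and can be obtained numerically or analytically.

```python
import numpy as np

def li_path(r_initial, r_final, n_images):
    """Linear interpolation of Cartesian coordinates between the two
    endpoints; returns n_images + 2 configurations including the fixed
    endpoint images."""
    return [(1.0 - t) * r_initial + t * r_final
            for t in np.linspace(0.0, 1.0, n_images + 2)]

def pair_distances(r):
    """All interatomic distances for a configuration r of shape (n_atoms, 3)."""
    diff = r[:, None, :] - r[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def idpp_targets(r_initial, r_final, n_images):
    """Pair distances interpolated linearly from the initial to the final
    configuration, one target matrix per intermediate image."""
    d0, d1 = pair_distances(r_initial), pair_distances(r_final)
    return [d0 + t * (d1 - d0)
            for t in np.linspace(0.0, 1.0, n_images + 2)[1:-1]]

def idpp_objective(r, d_target, eps=1e-12):
    """Objective for one image: weighted squared deviation of the current
    pair distances from the interpolated targets, with w(d) = 1/d**4."""
    d = pair_distances(r)
    iu = np.triu_indices(len(r), k=1)            # count each pair once
    w = 1.0 / (d[iu] ** 4 + eps)
    return float(np.sum(w * (d_target[iu] - d[iu]) ** 2))
```

Relaxing the LI images on this objective surface with the NEB machinery, exactly as one would on a true energy surface, then yields the IDPP path described above.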
in all casesthe initial path was generated using the atk software .once the initial path had been constructed , the neb method was applied to find the mep .the iterative neb minimizations using velocity projections ( ` quick - min ' ) were carried out until the maximum force on each atom in any of the images had dropped below ev / and then the climbing - image neb was used with a conjugate gradient minimization algorithm until the maximum force dropped below ev / .a tolerance of ev / is typically sufficient to get a good estimate of the path .the calculations of the condensed phase systems were carried out using vasp , pbe functional and paw , with the tst tools .the heptamer island system consisted of a slab of 3 layers , each with 36 atoms and the calculation included ( 3x3x1 ) k - points and an energy cutoff of 270 ev . the calculation of the amorphous si involved 214 atoms including only the gamma point and an energy cutoff of 245 ev .the methyl rotation was calculated using the atk - dft software , pbe functional and linear combination of atomic orbitals ( lcao ) .the first application is rotation of a methyl group in an ethane molecule .this simple example illustrates well the difference between li and idpp paths , which are shown in fig . 2 with 5 intermediate images .the constraint on the pairwise distances in the construction using the idpp path leads to a simple rotation of the methyl group , while the li method gives a path with significant variations in the c - h bond lengths .the convergence in a subsequent neb calculation starting from the idpp path and using atomic forces from an atk - dft calculation required about a third as many atomic iterations and scf iterations to reach convergence as compared with a calculation starting with the li path , see table 1 .the second application is an interchange of atoms in a heptamer island of six al - atoms and one ni - atom sitting on an al(111 ) surface .the transition involves the concerted movement of the ni - atom from a rim site to the central site of the island while the al - atom initially in the center takes the rim site .an li path with an odd number of images would have direct overlap of the two atoms in the image at the middle of the path resulting in divergence of the subsequent dft calculation . in order to avoidthat , the ni - atom was moved out along the surface normal by 0.2 in both the initial and final state as 7 intermediate images in the li path were generated , see fig .the neb calculation was then started from a path where the ni - atom was placed in the right position in the initial and final states but intermediate images were taken from generated the li path . despite these adjustments , the neb calculation required a large number of iterations , see table 1 . a path constructed using the lst method of hallgren and lipscomb was also created and is shown in fig .a discontinuity in the path is evident from the figure because the constraint given by the second term in eqn.([equ : lst ] ) is too weak . 
the idpp path is shown in fig .it has an even distribution of the images along the path , which otherwise is similar to the lst path .the final position of the images after neb relaxation using forces obtained by dft is shown in fig .the displacements of the exchanging atoms are substantially larger than in the idpp path , and neighboring atoms in the island turn out to undergo large displacement in the intermediate images , which is missing in the idpp because their position is nearly the same in the initial and final state .the neb / dft calculation started from the idpp path required half as many iterations as the calculation started from the li path .more importantly , the number of electronic iterations , which is proportional to the cpu time , was an order of magnitude smaller when the neb / dft calculation is carried out starting from the idpp path , see table 1 .the energy along the minimum energy path is shown in fig .interestingly , an intermediate minimum is identified by the neb calculation where the ni - atom has moved on - top of the cluster while the al - atom has pushed two of the rim atoms away from the center of the island .such intermediate minima are often found in neb calculations and the resulting mep then has more than one maximum .this illustrates the importance of calculating a full mep for the transition rather than just finding a first order saddle point .if a saddle point search is carried out starting from the state , the lower saddle point will most likely be found and the activation energy underestimated unless a further exploration of the mep is carried out .ni heptamer island on an al(111 ) surface .( a ) li path created by linear interpolation of cartesian coordinates of the initial and final states . here , the ni - atom as been displaced outwards along the surface normal by 0.2 in both the initial and final states to avoid a direct overlap of atoms in the middle image and subsequent divergence of the dft method .( b ) lst path generated using the method of halgren and lipscomb .while the path avoids the direct overlap of atoms , the path is discontinuous with a large gap between two of the intermediate images .( c ) idpp path generated by an neb calculation on the object function surface given by eqn .[ equ : idpp ] .the path is continuous and has an equal spacing of the images .( d ) the minimum energy path found after neb relaxation starting from any of the three paths shown in ( a - c ) using atomic forces obtained from dft ., scaledwidth=70.0% ] ni heptamer island on an al(111 ) surface , obtained using the neb method and dft calculations of the atomic forces . ]0.4 true cm .computational effort in converging to the minimum energy path starting from different initial paths : the linear interpolation of cartesian coordinates ( li ) and the image dependent pair potential ( idpp ) method presented here . the effort is reported as the number of electronic ( scf ) iterations needed as well as the number of atomic displacement ( ad ) iterations . in example 2 , the ni - adatom was moved outwards along the surface normal by 0.2 in the intermediate images of the li path to prevent the dft calculation from diverging . [ cols="<,^,^,^,^,^,^",options="header " , ] in a third example , a calculation was carried out of a transition where two si - atoms in amorphous silicon change places in a concerted way . 
the sample which contained 214 si - atomswas created by cooling a liquid and then annealing .the path was calculated using a large number of intermediate images , p-1=15 , to obtain a good resolution of this rather complex path .since the two si - atoms are changing places , the linear interpolation results in a small distance between the two atoms in the middle image . as a result ,the energy obtained in the dft calculations in the first few atom iterations is large and requires many electronic iterations .since number of images in the path is large , the average number of electronic iterations is only 50% larger when staring form the li path as compared to the calculation starting from the idpp path , see table 1 .but , the time required for the parallel neb calculation is even longer because it is held up by this one , troublesome image ( figs . 5 and 6 ) .the idpp method presented here is a robust and simple method for generating the input for neb calculations of minimum energy paths .it can save considerable amount of computer time as compared with a linear interpolation between initial and final states using cartesian coordinates .it can , furthermore , help avoid divergence in electronic structure calculations which can occur when the linear interpolation brings atoms too close together in an intermediate image of the initial path .the method is similar to the lst method of hallgren and lipscomb , but is guaranteed to produce a continuous path as it makes use of the neb method on the objective function surface generated by interpolation of pair distances .the method could also be used to generate input for other path calculations , such as free energy paths where sampling of system configurations in hyperplanes is carried out either classically or quantum mechanically and for transitions between magnetic states where the orientation of magnetic moments changes .further development of the method could involve testing of other choices for the weight function , which was simply taken here to be the one used by hallgren and lipscomb .also , testing in combination with other optimization methods in the neb calculation , such as bfgs , would be useful .since neb calculations using _ ab initio _ or dft evaluation of the atomic forces are computationally intensive , exploration of these and other ways of reducing the computational effort would be worthwhile .this work was supported by quantumwise a / s , eurostars project e6935 ( atommodel ) and the icelandic research fund .the calculations of the 214 si - atom system were carried out using the nordic high performance computing ( nhpc ) facility in iceland .h. jnsson , g. mills and k. w. jacobsen , _ classical and quantum dynamics in condensed phase simulations _ , edited by b. j. berne , g. ciccotti , and d. f. coker ( world scientific , singapore , 1998 ) , p. 385 .a. behn , p. m. zimmerman , a. t. bell and m. head - gordon , _theory comput . _* 7 * , 4019 ( 2011 ) .t. a. halgren and w. n. lipscomb , _ chem ._ , * 49 * , 225 ( 1977 ) .we note that the phrase ` linear synchronous transit ' has later been used to refer to paths constructed in quite different ways .g. k. schenter , g. mills , and h. jnsson , _ j. chem .phys . _ * 101 * , 8964 ( 1994 ) ; g. mills , g. k. schenter , d. makarov and h. jnsson , _ chem . phys. letters _ * 278 * , 91 ( 1997 ) ; g. h. jhannesson and h. jnsson , _ j. chem . phys . _ * 115 * , 9644 ( 2001 ) .
a method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy . an objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential ( idpp ) surface . this provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained , for example , by _ ab initio _ or density functional theory . the optimal path on the idpp surface is significantly closer to a minimum energy path than a linear interpolation of the cartesian coordinates and , therefore , reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path . the method is illustrated with three examples : ( 1 ) rotation of a methyl group in an ethane molecule , ( 2 ) an exchange of atoms in an island on a crystal surface , and ( 3 ) an exchange of two si - atoms in amorphous silicon . in all three cases , the computational effort in finding the minimum energy path with dft was reduced by a factor ranging from 50% to an order of magnitude by using an idpp path as the initial path . the time required for parallel computations was reduced even more because of load imbalance when linear interpolation of cartesian coordinates was used .
semiparametric models are statistical models indexed by both a finite dimensional parameter of interest and an infinite dimensional nuisance parameter .the profile likelihood is typically defined as where is the likelihood of the semiparametric model given observations and is the parameter space for .we also define the convergence rate of the nuisance parameter is the order of , where is some metric on , is any sequence satisfying , and is the true value of .typically , where is the euclidean norm and .of course , a smaller value of leads to a slower convergence rate of the nuisance parameter .for instance , the nuisance parameter in the cox proportional hazards model with right censored data , the cumulative hazard function , has the parametric rate , i.e. , .if current status data is applied to the cox model instead , then the convergence rate will be slower , with , due to the loss of information provided by this kind of data .the profile sampler is the procedure of sampling from the posterior of the profile likelihood in order to estimate and draw inference on the parametric component in a semiparametric model , where the profiling is done over the possibly infinite - dimensional nuisance parameter . show that the profile sampler gives a first order correct approximation to the maximum likelihood estimator and consistent estimation of the efficient fisher information for even when the nuisance parameter is not estimable at the rate .another bayesian procedure employed to do semiparametric estimation is considered in who study the marginal semiparametric posterior distribution for a parameter of interest . in particular , that marginal semiparametric posterior distributions are asymptotically normal and centered at the corresponding maximum likelihood estimates or posterior means , with covariance matrix equal to the inverse of the fisher information .unfortunately , this fully bayesian method requires specification of a prior on , which is quite challenging since for some models there is no direct extension of the concept of a lebesgue dominating measure for the infinite - dimensional parameter set involved .the advantages of the profile sampler for estimating compared to other methods is discussed extensively in , and . 
in many semiparametric models involving a smooth nuisance parameter ,it is often convenient and beneficial to perform estimation using penalization .one motivation for this is that , in the absence of any restrictions on the form of the function , maximum likelihood estimation for some semiparametric models leads to over - fitting .seminal applications of penalized maximum likelihood estimation include estimation of a probability density function in and nonparametric linear regression in .note that penalized likelihood is a special case of penalized quasi - likelihood studied in .under certain reasonable regularity conditions , penalized semiparametric log - likelihood estimation can yield fully efficient estimates for ( see , for example , ) .as far as we are aware , the only general procedure for inference for in this context known to be theoretically valid is a weighted bootstrap with bounded random weights ( see ) .it is even unclear whether the usual nonparametric bootstrap will work in this context when the nuisance parameter has a convergence rate .in contrast , and have shown that the profile sampler procedure without penalization can essentially yield second order frequentist valid inference for in semiparametric models , where the estimation accuracy is dependent on the convergence rate of the nuisance parameter .in other words , a faster convergence rate of the nuisance parameters can yield more precise frequentist inference for .these second order results are verified in and for several examples , including the cox model for both right censored and current status data , the proportional odds model , case - control studies with missing covariates , and the partly linear normal model .the convergence rates for these models range from the parametric to the cubic .the work in has shown clearly that the accuracy of the inference for based on the profile sampler method is intrinsically determined by the semiparametric model specifications through its entropy number .the purpose of this paper is to ask the somewhat natural question : does sampling from a profiled penalized log - likelihood ( which process we refer hereafter to as the penalized profile sampler ) yield first and even second order accurate frequentist inference ?the conclusion of this paper is that the answer is yes and , moreover , the accuracy of the inference depends in a fairly simple way on the size of the smoothing parameter .the unknown parameters in the semiparametric models we study in this paper includes , which we assume belongs to some compact set , and , which we assume to be a function in the sobolev class of functions supported on some compact set on the real line , whose -th derivative exists and is absolutely continuous with , where here is a fixed , positive integer and is the -th derivative of with respect to .obviously is some measurement of complexity of .we denote as the sobolev function class with degree .the penalized log - likelihood in this context is : where , is the log - likelihood of the single observation , and is a smoothing parameter , possibly dependent on data . in practice , can be obtained by cross - validation or by inspecting the various curves for different values of . 
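To make the penalty term concrete, the short sketch below evaluates the Sobolev roughness J^2(eta), the integral of the squared k-th derivative of the nuisance function over its support, by repeated finite differencing on a grid, and subtracts it from a supplied data log-likelihood. Both the grid representation of eta and the scaling of the penalty by the squared smoothing parameter are assumptions made for illustration; the precise scaling of the penalty with the sample size is model specific, and smoothing-spline software computes this quantity exactly for spline representations.

```python
import numpy as np

def roughness_penalty(eta_values, z_grid, k=2):
    """Finite-difference approximation of J^2(eta): the integral of the
    squared k-th derivative of eta over its compact support."""
    h = z_grid[1] - z_grid[0]                  # uniform grid assumed
    deriv = np.asarray(eta_values, dtype=float)
    for _ in range(k):
        deriv = np.gradient(deriv, h)
    return np.trapz(deriv ** 2, z_grid)

def penalized_loglik(loglik_value, eta_values, z_grid, lam, k=2):
    """Penalized log-likelihood: the data log-likelihood minus
    lam**2 * J^2(eta); the exact sample-size scaling of the penalty
    term is left to the particular model."""
    return loglik_value - lam ** 2 * roughness_penalty(eta_values, z_grid, k)
```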
the penalized maximum likelihood estimators and depend on the choice of the smoothing parameter .consequently we use the notation and for the remainder of this paper to denote the estimators obtained from maximizing ( [ penlik ] ) .in particular , a larger smoothing parameter usually leads to a less rough penalized estimator of . for the purpose of establishing first order accuracy of inference for based on the penalized profile sampler ,we assume that the bounds for the smoothing parameter are in the form below : the condition ( [ smooth ] ) is assumed to hold throughout this paper .one way to ensure ( [ smooth ] ) in practice is simply to set .or we can just choose which is independent of .it turns out that the upper bound guarantees that is -consistent , while the lower bound controls the penalized nuisance parameter estimator convergence rate .another approach to controlling estimators is to use sieve estimates with assumptions on the derivatives ( see ) .we will not pursue this further here .the log - profile penalized likelihood is defined as follows : where is for fixed and .the penalized profile sampler is just the procedure of sampling from the posterior distribution of by assigning a prior on . by analyzing the corresponding mcmc chain from the frequentist s point of view, our paper obtains the following conclusions : * _ distribution approximation _ : the posterior distribution with respect to can be approximated by the normal distribution with mean the maximum penalized likelihood estimator of and variance the inverse of the efficient information matrix , with error ; * _ moment approximation _ : the maximum penalized likelihood estimator of can be approximated by the mean of the mcmc chain with error .the efficient information matrix can be approximated by the inverse of the variance of the mcmc chain with error ; * _ confidence interval approximation _ : an exact frequentist confidence interval of wald s type for can be estimated by the credible set obtained from the mcmc chain with error .obviously , given any smoothing parameter satisfying the upper bound in ( [ smooth ] ) , the penalized profile sampler can yield first order frequentist valid inference for , similar as to what was shown for the profile sampler in .moreover , the above conclusions are actually second order frequentist valid results , whose approximation accuracy is directly controlled by the smoothing parameter . note that the corresponding results for the usual ( non - penalized ) profile sampler with nuisance parameter convergence rate in are obtained by replacing in the above with and with , for all respective occur where is as defined in ( [ convr ] ) .our results are the first higher order frequentist inference results for penalized semiparametric estimation .the layout of the article is as follows .the next section , section 2 , introduces the two main examples we will be using for illustration : partly linear regression for current status data and semiparametric logistic regression . some background is given in section 3 , including the concept of a least favorable submodel as well as some notations and the main model assumptions . 
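A minimal sketch of the penalized profile sampler itself is given below: a random-walk Metropolis chain over theta whose target is the profiled penalized log-likelihood plus a log-prior. The inner profiling step, maximising the penalized log-likelihood over the nuisance parameter for each proposed theta, is model specific and is treated here as a black-box function; the Metropolis proposal and step size are illustrative choices rather than the paper's prescription. The chain mean and the inverse chain covariance then serve as the estimators of theta and of the (suitably scaled) efficient information discussed above.

```python
import numpy as np

def penalized_profile_sampler(log_profile_pl, theta0, n_steps,
                              step=0.05, log_prior=lambda t: 0.0, seed=0):
    """Random-walk Metropolis chain with stationary density proportional to
    exp(log_profile_pl(theta)) * exp(log_prior(theta)).

    log_profile_pl(theta) must return the *profiled* penalized
    log-likelihood, i.e. the maximum over the nuisance parameter for the
    given theta (that inner optimisation is not shown here)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logp = log_profile_pl(theta) + log_prior(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        logp_prop = log_profile_pl(proposal) + log_prior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:      # accept / reject
            theta, logp = proposal, logp_prop
        chain.append(theta.copy())
    chain = np.asarray(chain)
    mean = chain.mean(axis=0)                 # approximates the maximum penalized likelihood estimator
    cov = np.atleast_2d(np.cov(chain, rowvar=False, ddof=1))
    info = np.linalg.inv(cov)                 # approximates the efficient information, up to sample-size scaling
    return chain, mean, info
```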
in section [ rate ] , some preliminary results are developed , including three rather different theorems concerning the convergence rates of the penalized nuisance parameters and the order of the estimated penalty term under different conditions .the corresponding rates for the two featured examples are also calculated in this section .the main results and implications are discussed in section 5 , and all remaining model assumptions are verified for the examples in section 6 . a brief discussion of future work is given in section 7 .we postpone all technical tools and proofs to the last section , section 8 .in this example , we study the partly linear regression model with normal residue error . the continuous outcome , conditional on the covariates , is modeled as where is an unknown smooth function , and with finite variance . for simplicity ,we assume for the rest of the paper that .the theory we propose also works when is unknown , but the added complexity would detract from the main issues .we also assume that only the current status of response is observed at a random censoring time .in other words , we observe , where indicator .current status data may occur due to study design or measurement limitations .examples of such data arise in several fields , including demography , epidemiology and econometrics . for simplicity of exposition, is assumed to be one dimensional . under the model ( [ eg2lik ] ) and given that the joint distribution for does not involve parameters , the log - likelihood for a single observation at is where is the standard normal distribution .the parameter of interest , , is assumed to belong to some compact set in .the nuisance parameter is the function , which belongs to the sobolev function class of degree .we further make the following assumptions on this model .we assume that is independent given .the covariates are assumed to belong to some compact set , and the support for random censoring time is an interval ] . to ensure the identifiability of the parameters , we assume that is positive and that the support of contains at least distinct points in ] , with , , such that for each there is a such that . the -entropy number ( -bracketing entropy number ) is defined as ( ) .before we present the first theorem , define for a known constant : [ ratethm1 ] assume conditions ( [ entrocon ] ) , ( [ eg1bou ] ) , ( [ egsm3 ] ) and ( [ egsm1 ] ) below hold for every and : then we have for satisfying . condition ( [ entrocon ] ) determines the order of the increments of the empirical processes indexed by . a detailed discussion about how to compute the increments of the empirical processes can be found in chapter 5 of .condition ( [ eg1bou ] ) is equivalent to the condition that is bounded away from zero uniformly in for ranging over . given that the distance function in ( [ egsm3 ] ) is just , ( [ egsm3 ] ) trivially holds provided that condition ( [ eg1bou ] ) holds . for the verification of ( [ egsm1 ] ), we can do an analysis as follows .the natural taylor expansions of the criterion function around the maximum point implies that , and ( [ disine ] ) implies that given condition ( [ eg1bou ] ) .we now apply theorem [ ratethm1 ] to derive the related convergence rates in the partly linear model in corollary [ eg1rate ] . 
however , we need to strengthen our previous assumptions to require the existence of a known such that , where and that the density for the joint distribution is strictly positive and finite .the additional assumptions here guarantee condition ( [ eg1bou ] ) .the following theorem [ nuirate ] and theorem [ ratethm3 ] can also be employed to derive the convergence rate of the non - penalized estimated nuisance parameter by setting to zero .however , we would need to assume that for some known when applying these theorems .thus we can argue that the the penalized method enables a relaxation of the assumptions needed for the nuisance parameter .[ eg1rate ] under the above set - up for the partly linear normal model with current status data , we have , for , moreover , if we also assume that for some known , then provided condition ( [ smooth ] ) holds .corollary [ eg1rate ] implies that the convergence rate of the estimated nuisance parameter is slower than that of the the regular nuisance parameter by comparing ( [ eg1pratre ] ) and ( [ eg10ratre ] ) .this result is not surprising since the slower rate is the trade off for the smoother nuisance parameter estimator .however the advantage of the penalized profile sampler is that we can control the convergence rate by assigning the smoothing parameter with different rates .corollary [ eg1rate ] also indicates that and .note that the convergence rate of the maximum penalized likelihood estimator , , is deemed as the optimal rate in .similar remarks also hold for corollary [ eg2rate ] below .the boundedness condition ( [ eg1bou ] ) appears hard to achieve in some examples .hence we propose theorem [ nuirate ] below to relax this condition by choosing the criterion function ] that are measurable for every probability measure .then , t8 .let and be classes of measurable functions . then for any probability measure and any , and , provided and are bounded by 1 , [ techdis ] the proof of t1 is found in .t1 implies that the sobolev class of functions with known bounded sobolev norm is -donsker . t2 and t3 are separately lemma 3.4.2 and theorem 2.7.11 in .( [ disine ] ) in t4 relates the kullback - leibler divergence and hellinger distance .its proof depends on the inequality that for every .t5 is lemma 9.2 in .t6 is a result presented on page 79 of and is a special case of lemma 5.13 on the same page , the proof of which can be found in pages 7980 .t7 and t8 are separately lemma 15.2 and 9.24 in ._ proof of theorem [ ratethm1 ] : _ the definition of implies that note that by t6 and assumption ( [ entrocon ] ) , we have by assumption ( [ egsm1 ] ) , we have combining with the above , we can deduce that where , and .the above inequality follows from assumption ( [ egsm3 ] ) . combining all of the above inequalities, we can deduce that where and .the equation ( [ ratee2 ] ) implies that . inserting into ( [ ratee1 ] ), we can know that , which implies has the desired order .this completes the whole proof . _ proof of corollary [ eg1rate ] : _ conditions ( [ eg1bou])([egsm1 ] ) can be verified easily in this example based on the arguments in theorem [ ratethm1 ] because has finite second moment , and is bounded away from zero and infinity uniformly for ranging over the whole parameter space .note that by taylor expansion . 
then by the assumption that is positive definite , we know that implies .thus we only need to show that the -bracketing entropy number of the function class defined below is of order to complete the proof of ( [ eg1pratre])([eg1j ] ) : for some constant .note that can be rewritten as : where and , where and where we know by t1 .we next calculate the -bracketing entropy number with norm for the class of functions . by some analysis we know that is strictly decreasing in for , and because is bounded uniformly over .in addition , we know that because the function has bounded derivative for uniformly over .the above two inequalities imply that the -bracketing number with uniform norm is of order for ] and is intrinsically bounded over $ ] .hence we can show that the frchet derivatives of and for any are bounded operators , from which we can deduce that is bounded by the product of some integrable function and .this ensures ( [ smcon2 ] ) and ( [ smcon3 ] ) . for ( [ nobias3 ] ), can be written as since .note that .this implies that .however , by the common taylor expansion , we have .this proves ( [ nobias3 ] ) .we next verify assumption e1 . for the asymptotic equicontinuity condition ( [ smcon1 ] ) ,we first apply analysis similar to that used in the proof of lemma [ eg2le0 ] to obtain by lemma 7.1 in , we know that and is bounded in probability by a multiple of . now we construct the set as follows : clearly , the probability that the function approaches 1 as .we next show that by t2 .note that depends on in a lipschitz manner .consequently , we can bound by the product of some constant and in view of t3 , where is as defined in the proof of lemma [ eg2le0 ] . by similar calculations as those performed in lemma [ eg2le0 ] , we can obtain .thus , and ( [ smcon1 ] ) follows .next we define .similar arguments as those used in the proof of lemma [ eg2le0 ] can be directly applied to the verification of ( [ eg2em3 ] ) in this second model . by the form of , the entropy number for is bounded above by that of .similarly , we know .moreover , the are uniformly bounded .this completes the proof for ( [ eg2em3 ] ) .the proof of ( [ eg2em1 ] ) and ( [ eg2em2 ] ) follows arguments quite similar to those used in the proof of lemma [ eg2le0 ] . in other words, we can show that and .this concludes the proof. _ proof ._ is bounded above and below by and respectively . by the third order taylor expansion of around ,for and , and the above empirical no - bias conditions ( [ emno1 ] ) and ( [ emno2 ] ) , we can find that the order of the difference between and is . by the inequality , we know that provided assumptions ( [ smooth ] ) and ( [ jrate ] ) hold .similar analysis also applies to the lower bound .this proves ( [ lnplexpps]).
the penalized profile sampler for semiparametric inference is an extension of the profile sampler method obtained by profiling a penalized log - likelihood . the idea is to base inference on the posterior distribution obtained by multiplying a profiled penalized log - likelihood by a prior for the parametric component , where the profiling and penalization are applied to the nuisance parameter . because the prior is not applied to the full likelihood , the method is not strictly bayesian . a benefit of this approximately bayesian method is that it circumvents the need to put a prior on the possibly infinite - dimensional nuisance components of the model . we investigate the first and second order frequentist performance of the penalized profile sampler , and demonstrate that the accuracy of the procedure can be adjusted by the size of the assigned smoothing parameter . the theoretical validity of the procedure is illustrated for two examples : a partly linear model with normal error for current status data and a semiparametric logistic regression model . as far as we are aware , there are no other methods of inference in this context known to have second order frequentist validity .
a markov network ( mn ) is a popular probabilistic graphical model that efficiently encodes the joint probability distribution for a set of random variables of a specific domain .mns usually represent probability distributions by using two interdependent components : an independence structure , and a set of numerical parameters over the structure .the first is a qualitative component that represents structural information about a problem domain in the form of conditional independence relationships between variables .the numerical parameters are a quantitative component that represents the strength of the dependencies in the structure .there is a large list of applications of mns in a wide range of fields , such as computer vision and image analysis , computational biology , biomedicine , and evolutionary computation , among many others .for some of these applications , the model can be constructed manually by human experts , but in many other problems this can become unfeasible , mainly due to the dimensionality of the problem . learning the model from dataconsists of two interdependent problems : learning the structure ; and given the structure , learning its parameters .this work focuses on the task of learning the structure .the structures learned may be used to construct accurate models for inference tasks ( such as the estimation of marginal and conditional probabilities ) , and also may be interesting per se , since they can be used as interpretable models that show the most significant interactions of a domain .the first scenario is known in practice as the density estimation goal of learning , and the second one is known as the knowledge discovery goal of learning [ chapter 16 ] .an interesting approach to mn structure learning is to use constraint - based ( also known as independence - based ) algorithms .such algorithms proceed by performing statistical independence tests on data , and discard all structures inconsistent with the tests .this is an efficient approach , and it is correct under the assumption that the distribution can be represented by a graph , and that the tests are reliable .however , the algorithms that follow this approach are quite sensitive to errors in the tests , which may be unreliable for large conditioning sets .a second approach to mn structure learning is to use score - based algorithms .such algorithms formulate the problem as an optimization , combining a strategy for searching through the space of possible structures with a scoring function measuring the fitness of each structure to the data .the structure learned is the one that achieves the highest score .it is important to mention that both constraint - based and score - based approaches have been originally motivated by distinct learning goals . according to the existing literature , constraint - based methods are generally designed for the knowledge - discovery goal of learning , and their quality is often measured in terms of the correctness of the structure learned ( structural errors ) .in contrast , most score - based approaches have been designed for the density estimation goal of learning , and they are in general evaluated in terms of inference accuracy . 
for this reason ,score - based algorithms often work by considering the whole mn at once during the search , interleaving the parameters learning step .this makes them more accurate for inference tasks .however , since learning the parameters is known to be np - hard for mns , it has a negative effect on their scalability .recently , there has been a recent surge of interest towards efficient methods based on a bayesian approach .this strategy follows a score - based approach , but with the knowledge discovery goal in mind .basically , an undirected graph structure is learned by obtaining the probabilistic maximum - a - posteriori structure .such contributions consist in the design of efficient scoring functions for mn structures , expressing the problem formally as follows : given a complete training data set , find an undirected graph such that where is the posterior probability of a structure , and is the familiy of all the possible undirected graphs for the domain size .this class of algorithms has been shown to outperform constraint - based algorithms in the quality of the learned structures .the contribution of this paper follows this hybrid approach .the method proposed in this work can improve the quality of structure learning by examining the _ irregularity _ of each structure . according to ,the irregularity of an undirected graph can be computed by summing the imbalance of its edges : where is the degree of the node in that graph .clearly if and only if is regular . for non - regular graphs is a measure of the lack of regularity .although there are more complex measures of irregularity for undirected graphs , this nave definition will suffice for the purposes of this work . in this work, we present the _ blankets joint posterior _ ( bjp ) as a score that computes the posterior probability of mn structures by taking advantage of the irregularities of the evaluated structure .this allow us to improve the learning process for domains with complex networks , where the topologies exhibit irregularities , which is a common property in many real - world networks .after providing some preliminaries , notations and definitions in section [ sec : stateoftheart ] , we introduce the bjp scoring function in section [ sec : bjp ] .section [ sec : experiments ] shows our experiments for several study cases .finally , section [ sec : conclusions ] summarizes this work , and poses several possible directions of future work .we begin by introducing the notation used for mns . then we provide some additional background about these models and the problem of learning their independence structure , and also discuss the state - of - the - art of mn structure learning .have as a finite set of indexes , lowercase subscripts for denoting particular indexes , e.g. , , and uppercase subscripts for subsets of indexes , e.g. , .let be the set of random variables of a domain , denoting single variables as single indexes in , e.g. , when .for a mn representing a probability distribution its two components are denoted as follows : , and . 
is the structure , an undirected graph where the nodes are the indices of each random variable of the domain , and is the edge set of the graph .a node is a neighbor of when the pair .the edges encode direct probabilistic influence between the variables .instead , the absence of an edge manifests that the dependence could be mediated by some other subset of variables , corresponding to conditional independences between these variables .a variable is conditionally independent of another non - adjacent variable given a set of variables if .this is denoted by ( or for the dependence assertion ) . as proven by ,the independences encoded by allow the decomposition of the joint distribution into simpler lower - dimensional functions called factors , or potential functions .the distribution can be factorized as the product of the potential functions over each clique ( i.e. , each completely connected sub - graph ) of , that is where is a constant that normalizes the product of potentials .such potential functions are parameterized by the set of numerical parameters . for each variable of a mn , its markov blanket ( mb )is composed by the set of all its neighbor nodes in the graph .hereon we denote the mb of a variable as .an important concept that is satisfied by mns is the local markov property , formally described as : * * local markov property*. a variable is conditionally independent of all its non - neighbor variables given its mb .that is by using such property , the conditional independences of can be read from the structure .this is done by considering the concept of separability .each pair of non - adjacent variables are said to be separated by a set of variables when every path between and in contains some node in . in machine learning ,statistical independence tests are a well - known tool to decide whether a conditional independence is supported by the data .examples of independence tests used in practice are mutual information , pearson s and , the bayesian statistical test of independence , and the partial correlation test for continuous gaussian data .such tests require the construction of a contingency table of counts for each complete configuration of the variables involved ; as a result , they would have an exponential cost in the number of variables .for this reason , the use of the local markov property has a positive effect for learning independence structures , allowing the use of smaller tests .accordingly , the scoring function proposed in this work takes advantage of this property to avoid the computation of potentially expensive and unreliable tests .this is achieved by examining the irregularities present in a structure .the structure of a mn can be learned from a training dataset , assumed to be a representative sample of the underlying distribution .commonly , has a tabular format , with a column for each variable of the domain , and one row per data point .this work assumes that each variable is discrete , with a finite number of possible values , and that no data point in has missing values .as mentioned in section [ sec : intro ] , this work focuses on the bayesian approach for mn structure learning of ( [ eq : maxg ] ) . for this reason , in this subsection we discuss two recently proposed scoring functions that follow such approach : the marginal pseudo likelihood ( mpl ) score , and the independence - based score ( ib - score ) . 
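Since the graphical notions used here (Markov blankets, the local Markov property, separability) are purely structural, they are easy to state in code. The small sketch below, written with networkx purely for illustration, reads a blanket off a graph and checks separation by deleting the conditioning set and testing connectivity, which is exactly the notion of separability described above; the toy graph is a made-up example, not one taken from the paper.

```python
import networkx as nx

def markov_blanket(g, v):
    """In a Markov network the blanket of a variable is just the set of
    its neighbours in the independence structure."""
    return set(g.neighbors(v))

def separated(g, x, y, z):
    """True when the set z separates x from y in the undirected graph g,
    i.e. every path between them contains some node of z (x, y not in z)."""
    h = g.copy()
    h.remove_nodes_from(z)
    return not nx.has_path(h, x, y)

# local Markov property on a toy structure: a variable is independent of
# every non-neighbour given its blanket
g = nx.Graph([(0, 1), (0, 2), (0, 3), (3, 4)])
assert separated(g, 1, 4, markov_blanket(g, 1))
assert not separated(g, 1, 4, set())
```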
in mpl , each graph is scored by using an efficient approximation to the posterior probability of structures given the data .this score approximates the posterior by considering .since the data likelihood of the graph is in general extremely hard to evaluate , mpl utilizes the well - known approximation called the pseudo - likelihood .this score was proved to be consistent , that is , in the limit of infinite data the solution structure has the maximum score . for finding the mpl - optimal structure ,two algorithms were presented : an exact algorithm using pseudo - boolean optimization , and a fast alternative to the exact method , which uses greedy hill - climbing with near - optimal performance .this algorithm learns the mb for each variable , locally optimizing the mpl for each node , independently of the solutions of the other nodes . for this, it uses an approximate deterministic hill - climbing procedure similar to the well - known iamb algorithm .finally , a global graph discovery method is applied by using a greedy hill - climbing algorithm , searching for the structure with maximum mpl score , but only restricting the search space to the conflicting edges . the independence - based score ( ib - score ) , is also based on the computation of the posterior , but using the statistics of a set of conditional independence tests . in this score the posterior is computed by combining the outcomes of a set of conditional independence assertions that completely determine .such set was called the _ closure _ of the structure , denoted .thus , when using ib - score the problem of structure learning is posed as the maximization of the posterior of the closure for each structure .formally , applying the chain rule over the posterior of the closure , the ib - score approximates such probability by assuming that all the independence assertions in the closure are mutually independent .the resulting scoring funtion is computed as : where each term is computed by using the bayesian statistical test of conditional independence .together with the ib - score , an efficient algorithm called ibmap - hc is presented to learn the structure by using a heuristic local search over the space of possible structures .this section proposes the blankets joint posterior ( bjp ) , a scoring function to compute the posterior probability of the independence structure of a mn .in particular , bjp has been designed in order to accurately approximate the posterior of structures for cases where the underlying structure contains irregularities .the correctness of bjp is discussed in the appendix [ app : correctness ] .consider some graph representing the independence structure of a positive mn .it is a well - known fact that , by exploiting the graphical properties of such models , the independence structure can be decomposed as the unique collection of the mbs of the variables ( * ? ? ?* theorem 4.6 on p. 
121 ) .thus , the computation of the posterior probability of given a dataset is equivalent to the joint posterior of the collection of mbs of , that is , in contrast with previous works , where the mb posteriors are simply assumed to be independent , the chain rule is applied to ( [ eq : gdecomposed ] ) , obtaining in this way the posterior probability of each mb can be described in terms of conditional probabilities , using the training dataset as evidence , together with the mb of the other variables .the computation of has to be done progressively , first calculating the posterior of the mb of a variable , and then , the knowledge obtained so far can be used as evidence to compute the posterior of the mb of other variables .however , this decomposition is not unique , since each possible ordering for the variables is associated to a particular decomposition . the basic idea underlyingthe computation of bjp is to sort the mbs by their size ( that is , the degree of the nodes in the graph ) in ascending order .this allows a series of inference steps , in order to avoid the computation of expensive and unreliable probabilities , and obtaining a more accurate data efficiency .this is due to the fact that as the size of the mb increases , greater amounts of data are required for accurately estimating its posterior probability . by using the proposed strategy ,the blanket posteriors of variables with fewer neighbors are computed first , and this information is used as evidence when computing the posteriors for variables with bigger blankets . as a result , the information obtained from the more reliable blanket posteriors is used for computing less reliable blankets posteriors .now consider an example probability distribution with four variables , represented by a mn whose independence structure is given by the graph of figure [ fig : hub ] .when sorting its nodes by their degree in ascending order , the vector can be obtained , and the blankets joint posterior is decomposed as this example allows us to illustrate the intuition behind bjp , since the sample complexity of the blanket posterior for variables , , and is lower than that of . for the sake of clarity ,appendix [ appendix : example ] shows the complete computation of the bjp score for this example . ]given an undirected graph , denote the ordering vector which contains the variables sorted by their degree in ascending order .therefore , we reformulate ( [ eq : jointblankets2_withoutorder ] ) as we now proceed to express the posterior of a mb in terms of probabilities of conditional independence and dependence assertions . the computation of can be derived from the posterior of the independences and dependences represented by each mb : the two factors in this equation will be interpreted as follows : * the first product computes the probability of independence between and its non - adjacent variables , conditioned on its mb , given the previously computed mbs and the dataset .it can be computed as + + here , indexes over the variables for which the mb posterior probability is not already computed . for the remaining variables the posterior of independence will be simply inferred as 1 .this inference can be done since the independence is determined by the mb of , which is in the evidence .we discuss the correctness of this inference step in appendix [ app : correctness ] . 
* the second product in ( [ eq : blanketposterior ] )computes the posterior probability of dependence between and its adjacent variables , conditioned on its remaining neighbors , given the mbs computed previously and the dataset .it can be computed as + + here , again indexes over the variables for which the mb posterior is not already computed . for the remaining variablesthe posterior of dependence will be inferred as 1 .also , this inference can be done since the dependence is determined by the mb of , which is in the evidence .the correctness of this inference step is also discussed in appendix [ app : correctness ] .the only approximation in bjp is made in ( [ eq : blanketposterior ] ) , by assuming that all the independence and dependence assertions that determine the mb of a variable are mutually independent .this is a common assumption , made implicitly by all the constraint - based mn structure learning algorithms , and also by the mpl score and the ib - score .for the computation of the posterior probabilities of independence and dependence used in ( [ eq : ciposterior ] ) and ( [ eq : cdposterior ] ) , respectively , bjp uses the bayesian test of , in the same way as the ib - score explained in the previous section .precisely , this statistical test computes the posterior of independence and dependence assertions , and has been proven to be statistically consistent in the limit of infinite data .we now discuss the computational complexity of the score . for a fixed structure ,the computational cost is directly determined by the number of statistical tests that it is required to perform on data .recall that the computational cost of each test is exponential in the number of variables involved . as stated in ( [ eq : jointblankets2 ] ), bjp computes the posterior probability of the mb for the variables of the domain .for each , it is required to perform statistical tests on data , by using ( [ eq : blanketposterior ] ) .then , one half of the tests are inferred when computing the posterior of independences and dependences of ( [ eq : ciposterior ] ) and ( [ eq : cdposterior ] ) .thus , only tests are required for computing the bjp score of a structure .we end this section with the optimization proposed in this work for learning the structure with the bjp score .the nave optimization consists in maximizing over all the possible undirected graphs for some specific problem domain , as in ( [ eq : maxg ] ) , computing with ( [ eq : jointblankets2 ] ) the score for each structure .since the discrete optimization space of the possible graphs grows rapidly with the number of variables , the search is clearly intractable even for small domain sizes .hence , in this work we test the performance of bjp with brute force only for small domains .for larger domains we use the ibmap - hc algorithm , as an efficient approximate solution proposed in .the optimization made by ibmap - hc is a simple heuristic hill - climbing procedure .the search is initialized by computing the score for an empty structure with no edges , and nodes .the hill - climbing search starts with a loop that iterates by selecting the next candidate structure at each iteration .a nave implementation of hill - climbing would select the neighbor structure with maximum score , computing the score for the neighbors that differ in one edge .such expensive computation is avoided by selecting the next candidate with a heuristic that flips the most promising edge .once the next candidate is selected , its score is computed to be 
compared to the best scoring structure found so far .the algorithm stops when the neighbor proposed does not improve the current score .this section presents several experiments in order to determine the merits of bjp in practical terms .two sets of experiments from low - dimensional and high - dimensional problems are presented .for the low - dimensional setting , we used brute force ( i.e. , exhaustive search ) to study the convergence of the scoring functions to the exact solution .the goal is to prove experimentally that the sample complexity for successfully learning the exact structure is better for bjp than for the competitors . for the high - dimensional setting , we used hill - climbing optimization for all the scoring functions .this experiments were performed in order to prove that , by using a similar search strategy , bjp identifies structures with fewer structural errors than the selected competitors .the software to carry out the experiments has been developed in java , and it is publicly available .a mn scoring function is consistent when the structure which maximizes the score over all the possible structures is the correct one , in the limit of infinite data .however , in practice the data is often too scarce to satisfy this condition , and the sample size needed to reach the correct structure varies across different scoring functions .this is referred to as the _ sample complexity _ of the score .the experiments here presented were carried out in order to measure the sample complexity of three different scoring functions : mpl , ib - score and bjp .this is achieved by measuring their ability to return , by brute force , the exact independence structure of the mn which generated the data .0.2 ) ; model 2 has ; model 3 has ; model 4 has ; models 5 and 6 have the maximum irregularity for six variables ( ).,title="fig : " ] 0.2 ) ; model 2 has ; model 3 has ; model 4 has ; models 5 and 6 have the maximum irregularity for six variables ( ).,title="fig : " ] [ fig : model2 ] + + + 0.2 ) ; model 2 has ; model 3 has ; model 4 has ; models 5 and 6 have the maximum irregularity for six variables ( ).,title="fig : " ] 0.2 ) ; model 2 has ; model 3 has ; model 4 has ; models 5 and 6 have the maximum irregularity for six variables ( ).,title="fig : " ] + + + 0.2 ) ; model 2 has ; model 3 has ; model 4 has ; models 5 and 6 have the maximum irregularity for six variables ( ).,title="fig : " ] 0.2 ) ; model 2 has ; model 3 has ; model 4 has ; models 5 and 6 have the maximum irregularity for six variables ( ).,title="fig : " ] to make this comparative study , we selected the six different target structures shown in figure [ fig : n6graphs ] .these graphs represent different cases of irregularity , according to ( [ eq : irregularity ] ) .the first target structure is regular ( irr = 0 ) , the second has a little irregularity , the third and fourth structures are irregular structures with a hub topology , and the fifth and sixth target structures have maximum irregularity for . 
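to make the irregularity measure and the degree ordering used above concrete, the following minimal octave / matlab sketch (ours, not taken from the cited works; the adjacency matrix is an assumed toy example resembling the hub topologies of models 3 and 4) evaluates the edge-imbalance sum and the ascending-degree ordering that bjp relies on:

....
% minimal sketch ( not from the cited works ) : irregularity of a small hub graph ,
% irr(G) = sum over edges of |deg(u) - deg(v)| , cf. the definition in the preliminaries .
A = [0 1 1 1 1 1;   % assumed toy adjacency matrix : node 1 is a hub
     1 0 0 0 0 0;
     1 0 0 0 0 0;
     1 0 0 0 0 0;
     1 0 0 0 0 0;
     1 0 0 0 0 0];
deg = sum(A, 2);                   % node degrees
[u, v] = find(triu(A));            % each undirected edge counted once
irr = sum(abs(deg(u) - deg(v)));   % = 20 here : five edges , each contributing |5 - 1|
[~, order] = sort(deg, 'ascend');  % the ascending-degree blanket ordering used by bjp
....

a regular graph gives irr = 0 under this computation, while star-like structures such as the one above maximise the imbalance for a fixed number of nodes.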
for constructing a probability distribution from these independence structures according to ( [ eq : gibbs ] ) ,random numeric values were assigned to their maximal clique factors , sampled independently from a uniform distribution over .ten distributions were generated for each target structure , considering only binary discrete variables .then , for each one , ten different random seeds were used to obtain datasets for each graph , by using the gibbs sampling tool of the open - source libra toolkit .the gibbs sampler was run with 100 burn - in and 1000 sampling iterations , as commonly used in other works .since we have variables , the search space consists of different undirected graphs . the experiment consisted of evaluating the number of true structures returned by each score over the 100 datasets .this is called here the success rate of the scoring function .the success rate is computed for increasing dataset sizes .of course , since greater sizes of the dataset lead to better estimations , affects the quality of the structure learned .therefore , a score is considered better than another score when its success rate converges to with lower values of .table [ table : consistency ] shows the results of the experiment .the first column shows the target structures , the second shows their irregularity , the third shows each sample size used , and the fourth shows the success rate . for all the cases , it can be seen how the success rate of the three scoring functions grows with the sample size .the results in the fourth column show that bjp has a better success rate in almost all cases . for structures 1 and 2, ib - score shows better convergence than bjp , but they would eventually converge similarly for greater sizes .in contrast , for structures 3 , 4 , 5 and 6 , bjp has in general the best success rate .for all the cases mpl has a slower convergence than ib - score and bjp .this is consistent with the experimental results shown in , where the quality for mpl with irregular structures is reported as very low .interestingly , bjp obtains improvements in success rate of up to 8.4% respect to ib - score , and up to 59% respect to mpl . in general , these results are consistent with the hypothesis of this work , since bjp has been designed to improve the sample complexity when learning irregular structures .the following section shows the performance of the three scoring functions for more complex domains ..success rate of bjp , ib - score and mpl over 100 datasets for the target structures on figure [ fig : n6graphs ] .rates in bold face correspond to the best case .[ table : consistency ] [ cols="^,^,^,^,^,^,^ " , ] in this section , experiments in higher - dimensional setting are presented . for this, we evaluate the quality of the structures learned by using an approximate search mechanism . the bjp score andthe ib - score were tested with the ibmap - hc algorithm proposed in , explained at the end of section [ sec : bjp ] .the mpl scoring function was tested with the most efficient optimization algorithm proposed in , described in section [ sec : structurelearningalgs ] .the goal in the experiments is to show how the bjp score can improve the quality of the structures learned over the competitor scores , mainly for irregular underlying structures . 
for this , the selected graphs capture the properties of several real - world problems , where the target structure has fewer nodes with large degrees , and the remaining nodes have very small degree .examples of problems with this characteristic include gene networks , protein interaction networks and social networks .thus , for this comparative study , we used two types of structures : hubs and scale - free networks generated by the barabasi - albert model .these structures have an increasing complexity both in and in . additionally , we used four real - world networks , taken from the sparse matrix collection of .the hub networks are shown in figure [ fig : hubs ] , the scale - free networks are shown in figure [ fig : scalefree ] , and the real - world networks are shown in figure [ fig : realworldnets ] .for each target structure we generated random distributions and random samples for each distribution , with the gibbs sampler tool of the libra toolkit .thus , a total of datasets were obtained for each graph , with the same procedure explained in the previous section . as a quality measure ,we report the average edge hamming distance between the hundred learned structures and the underlying one , computed as the sum of false positives and false negatives in the learned structure . as in the previous section ,the algorithms were executed for increasing dataset sizes , to asses how their accuracy evolves with data availability .* hub 1 * , title="fig : " ] * hub 2 * , title="fig : " ] + + * hub 3 * , title="fig : " ] * hub 4 * , title="fig : " ] + + * scale - free 1 * , title="fig : " ] + ' '' '' + * scale - free 2 * , title="fig : " ] + ' '' '' + * scale - free 3 * , title="fig : " ] + ' '' '' + * scale - free 4 * , title="fig : " ] + ' '' '' + * a ) karate * , title="fig : " ] + ' '' '' + * b ) curtis-54 * , title="fig : " ] + ' '' '' + * b ) will-57 * , title="fig : " ] + ' '' '' + * dolphins * , title="fig : " ] + ' '' '' + table [ table : hubs ] shows the comparison of bjp against mpl and ib - score for the hub structures of figure [ fig : hubs ] . the table shows the structures , their sizes , and their irregularities , in the first , second and third columns , respectively .the dataset sizes are in the fourth column .the fifth column shows the average and standard deviation of the hamming distance over the repetitions .the sixth column shows the corresponding runtimes ( in seconds ) .when analyzing these results , it can be seen that for all the algorithms the more complex the underlying structure ( determined by and ) , the larger is the number of structural errors for any value of .the results show that bjp obtains the best performance , reducing the number of errors of the structures learned for all the cases .when compared to ib - score , the improvements are more important as and grow .this is because in those cases bjp uses a set of independence tests with lower sample complexity than ib - score to estimate the posterior of the structures .it can also be seen that , for all the target structures , mpl has the slowest convergence in .this is consistent with the results shown in the previous section , obtained by using brute force . in terms of the respective runtimes , the optimization using the bjp score obtains in general runtimes comparable to mpl and ib - score . 
for the case of hub 4, bjp shows the best runtime for all the cases where .this is because the more complex the underlying structure the better the convergence of the bjp score to correct structures .table [ table : scalefree ] shows the comparison of bjp against mpl and ib - score for the scale - free networks of figure [ fig : scalefree ] .the information of the table is organized in the same way as in table [ table : hubs ] .for all the scores , it can be seen that the trends in these results are similar to those of the hub structures .in contrast with the hub structures , in the scale - free networks the size of the blankets is more variable . this can explain the diference in the trends of the hamming distance , when compared with the results obtained for the hub networks .for the two most complex structures ( scale - free 3 and 4 ) , bjp reduces the number of errors of the structures learned in all the cases . in terms of the respective runtimes, bjp obtains the best runtimes for almost all the cases .specifically , for scale - free 2 , for all the cases where ; for scale - free 3 , for all the cases ; and for scale - free 4 , for all the cases where .as the complexity of the target structures grows , we can see a better convergence of the bjp score to correct structures .finally , table [ table : realnets ] show the results for the real - world networks of figure [ fig : realworldnets ] .again , the information of this table is organized in the same way as in the previous tables .the real network structures are ordered by their complexity ( in and ) .the trends in these results are consistent to those in the previous tables , in terms of quality and runtime .for the karate , curtis-54 and will-57 networks , bjp improves the quality of the structures learned for all the cases when . when ib - score obtains the best qualities .however , the differences in favor of ib - score are not statistically significant , and the runtime of the optimization is one or two orders of magnitude slower compared to bjp . for the dolphins network, bjp improves the quality of the structure learned for all the cases . regarding the runtimes, it can be seen again that bjp tends to improve the runtime over mpl and ib - score for almost all the cases . in general, the results discussed confirm that bjp always outperforms the competitors when data are scarce . also , the improvements are greater both in quality and runtime , for the more complex models .this confirms the hypothesis that the bjp score takes advantage of irregularities to optimize the sample complexity .in this work we have introduced a novel scoring function for learning the structure of markov networks .the bjp score computes the posterior probability of independence structures by considering the joint probability distribution of the collection of markov blankets of the structures .the score computes the posterior of each markov blanket progressively , using information of other blankets as evidence .the blanket posteriors of variables with fewer neighbors is computed first , and then this information is used as evidence for computing the posteriors for variables with bigger blankets .thus , bjp can be useful to improve the data efficiency for problems with complex networks , where the topology exhibits irregularities , such as social and biological networks . 
in the experiments ,bjp scoring proved to improve the sample complexity when compared with the state - of - the - art competitors .the score is tested by using exhaustive search for low - dimensional problems and by using a heuristic hill - climbing mechanism for higher - dimensional problems .the results show that bjp produces more accurate structures than the selected competitors .we will guide our future work toward the design of more effective optimization methods , since the hill - climbing optimization has two inherent disadvantages : i ) by only flipping one edge per step it scales slowly with the number of variables of the domain , ii ) it is prone to getting stuck in local optima .moreover , we consider that the properties of bjp score have considerable potential for both further theoretical development , and applications .this work was supported by consejo nacional de investigaciones cientficas y tcnicas ( conicet ) [ pip 2013 117 ] , universidad nacional del litoral ( unl ) [ cai+d 2011 548 ] and agencia nacional de promocin cientfica y tecnolgica ( anpcyt ) [ pict 2014 2627 ] and [ pict-2012 - 2731 ] .
markov networks are extensively used to model complex sequential, spatial, and relational interactions in a wide range of fields. by learning the independence structure of a domain, more accurate joint probability distributions can be obtained for inference tasks or, more directly, for interpreting the most significant relations among the variables. however, the performance of currently available structure learning methods depends heavily on two choices: the structure representation, and the approach used to learn that representation. this work follows the probabilistic maximum-a-posteriori approach for learning undirected graph structures, which has gained interest recently. the _ blankets joint posterior _ score is designed for computing the posterior probability of structures given data. in particular, the proposed score can improve the learning process when the solution structure is irregular (that is, when there is an imbalance in the number of edges over the nodes), a property present in many real-world networks. the proposed approximation computes the joint posterior distribution from the collection of markov blankets of the structure. essentially, a series of conditional distributions is calculated, using information about other markov blankets in the network as evidence. our experimental results demonstrate that the proposed score has better sample complexity for learning irregular structures than state-of-the-art scores. by considering optimization with greedy hill-climbing search, we show for several study cases that our score identifies structures with fewer errors than its competitors.
the beam interlock system ( bis ) of the cern accelerator chain is responsible for transmitting the beam permit along the large hadron collider ( lhc ) , the super proton synchrotron ( sps ) , the transfer lines and the ps booster .the beam permit loop signals are two different square signals with frequencies ( loop a ) and ( loop b ) , sent in opposite directions .these beam permit loop signals are transmitted over single - mode optic fibre .[ fig : bpl ] shows the topology of the beam permit loops at the lhc .there are two signals for each of the two beams , one transmitted clockwise and the other anti - clockwise .there are seventeen beam interlock controllers ( bic ) in the lhc , two at each lhc point , named with the point number and side ( left or right ) , and one at the cern control centre ( ccc ) , named ccr .the two generators of the beam permit signal , named cibg , are installed in point 6 , where the dump system is located . in the sps, there is a similar architecture , with one bic per point , and two loops for one beam .injection and extraction lines also have their own beam permit loops .the controllers receive the users inputs , coming from the user systems .these inputs are connected with a logical and inside the controller , resulting in the local permit at the bic .if the local permit is true , then the bic re - transmits the beam permit signal to the next bic .a total of 12 fibres are deployed at each controller : one for each incoming signal and one for each outgoing signal , for a total of eight active fibres .there are wo spare fibres to each of the neighbour controllers .the distance between controllers is varied , as short as a metre and as long as 6 kilometres .the beam permit loops and the implementation of the optical links are described in .the controls interlocks beam optical ( cibo ) board designed at cern uses an eled single - mode transmitter and a pin diode receiver to implement the optical transceiver for the beam permit signals .the working wavelength of the loops is , and the receiver has a sensitive response in the range .the output power of the transmitter is typically between and and the cibo board is designed to deliver around .the g.652 type optical fibres have a maximum length of , with an initial worst case attenuation of at .a number of false dumps have occurred that may have been caused by increased attenuation in the fibres , what drives the need for a monitoring system to evaluate their performance during operation .radiation can both create point defects in the silica and activate already existing defects , causing radiation induced attenuation ( ria ) .therefore , a system to monitor the fibres attenuation is advisable . such a monitoring system for the lhc and sps fibres must not interfere with the beam permit loop signals , as they are critical to the accelerator operation. it would be convenient to have measurements of the fibres attenuation over time , and the capability to monitor both spare and active fibres .the existing beam permit loop signals can not be measured directly due to the tight power margins involved .the use of a tap with sufficient coupling ratio that allows for the measurement of the optical power is discouraged because of the high extra losses on the links . 
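to make the power-margin argument concrete, the following back-of-the-envelope budget (a sketch with purely illustrative numbers; none of them are the installed values, which are not reproduced here) shows why inserting an additional tap is problematic on the longest links:

....
% illustrative link budget ( all numbers are assumed , not the installed values ) .
tx_dbm      = -17;    % hypothetical eled launch power [dbm]
att_db_km   = 0.4;    % hypothetical fibre attenuation at the working wavelength [db/km]
len_km      = 6;      % longest links are of the order of a few kilometres
extra_db    = 1.5;    % hypothetical connector / splice losses
tap_db      = 3;      % hypothetical insertion loss of a monitoring tap
rx_sens_dbm = -28;    % hypothetical receiver sensitivity [dbm]

rx_dbm          = tx_dbm - att_db_km * len_km - extra_db;  % received power without a tap
margin_db       = rx_dbm - rx_sens_dbm;                    % remaining margin
margin_with_tap = margin_db - tap_db;                      % a tap consumes a large share of it
fprintf('margin %.1f db, with tap %.1f db\n', margin_db, margin_with_tap);
....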
as an alternative ,the transmission of a separate optical signal over the existing fibres is proposed .the use of wavelength division multiplexing enables the transmission of multiple signals over the same fibre .a wavelength division multiplexer ( wdm ) is a passive and bi - directional device , which has one common port , in which multiple wavelengths can be transmitted at the same time , and two ( or more ) ports which only allow one wavelength .a large wavelength separation of the monitoring signal with respect to the beam permit loop signals was evaluated .the chosen wavelength is , which is in a separate window from the beam permit loop transmitter . nevertheless , the receiver is sensitive to this window , an effect that has to be evaluated in order to verify that the beam permit loops are not disturbed by the monitoring signal .standard commercial components for the system were chosen in order to ease development and deployment .the transceivers are standard small form - factor pluggable ( sfp ) .these typically implement a diagnostics information interface which allows for the measurement of transmitted and received power .commercial network switches are used to house , power and access the monitoring information on the sfps .a computer is used to connect to the switches , retrieve the information and process it . in order to be able to separate the optical signals , commercial wdmshave been evaluated and a system has been proposed and tested . the isolation provided by these devices is tight for systems , as these are high power devices , in the order of , two orders of magnitude higher than the beam permit loop signals .nevertheless , disabling the ports on the switch eliminates the interference from the sfp transceiver to the beam permit loops receivers , while still permitting the power measurements .the transmission of the beam permit loop signals through wdms does not cause any significant signal degradation . a proposed topology to monitor a linkis shown in fig .[ fig : monitorsystem ] , where the two points of one beam permit loop link are shown .the monitoring signal is sent in the opposite direction from the beam permit , to take advantage of a higher isolation between the two input ports of the wdm . due to the large power margin of the sfp transceivers , attenuatorsare also used in this topology to reduce the power in addition to the wdm isolation .a first test system was installed in the sps spare fibres , between points ba4 and ba6 .a bic was installed and configured in ba5 , together with two switches that held the sfp transceivers to monitor the attenuation .the bic was set to latch mode and the whole system ran for 30 days , without observing any losses of the beam permit .attenuators were connected at the transmitter outputs of the sfp transceivers , with an attenuation of .a drawing of the test setup is shown in fig .[ fig : spstest ] .the topology effectively emulates two separate locations , with one cibo transceiver at each one , and the monitoring system connected to two fibres .the system is deployed at the lhc , measuring a selection of spare fibres around the ring . in order to avoid any interference with the system due to the connection of the wdms , no livefibres are monitored . since the measuredfibres are spares and there is no beam permit signal on those , no wdms are required .the information from this setup is nonetheless useful to track whether the fibres are being affected by radiation while the lhc is in operation . 
in order to limit the number of switches , three locations were selected to install the switches : points 1 , 3 and 7 .loopback connections are in place so the fibres are measured from each switch alone .bypass connections are used at the shortest paths at the same lhc point to go to the next lhc point .the monitored fibres are ( all returning to the first point ) : * us15 to ua23 ( point 1-point 2 ) . * us15 to ccr ( point 1-ccr ) .* us15 to ua87 ( point 1-point 8) .* sr3 to ua43 ( point 3-point 4 ) . * sr3 to ua27 , with a bypass at uj33 ( point 3-point 2 ) . * sr7 to ua67 ( point 7 to point 6 ) . * sr7 to ua83 , with a bypass at tz76 ( point 7-point 8) .the topology of the deployed monitoring system is shown in fig .[ fig : sparemonitor ] .the choice of the fibres to be monitored was made so they are close to high radiation areas , such as the collimators in points 3 and 7 .an upgrade of the beam interlock system should be ready , at the earliest , for the long shutdown 2 , starting in 2018 .some of the components used in the design of the current bis are close to becoming deprecated by the manufacturers . in order to ensurethe availability of the beam interlock system in the future , selection of the components is now under study . in order to take advantage of their reliability , interoperability and the ease of insertion and extraction ,sfp transceivers are being used for the research and development phase of the next version of the beam interlock system .the monitoring and diagnostics capabilities of these devices , combined with the availability of manufacturers makes it an appropriate choice .in addition , the variety of configurations that they allow is ample enough to cover many scenarios at the cern accelerator complex .sfp transceivers from most manufacturers have versions with different output power values , leading to several power margins that can be used to accommodate the various fibre lengths of each link , or even to compensate for increased fibre attenuation .the bis upgrade must be compatible with the current systems , meaning the interfaces and signals have to be compatible with the original system . due to the compatibility requirement, the new programmable devices have to be carefully chosen in order to be capable of implementing all input and output signals that exist in the current bis .the implementation of the beam permit loops will also be studied , checking the possibility of transmitting data messages instead of a fixed loop frequency .fibre attenuation increase due to radiation is a concern for the bis , as it affects its reliability and availability of the lhc .the current system lacks monitoring capability and the addition of a measurement system is troublesome .the use of commercially available , standardised sfp transceivers enables the measurement of the optic fibre attenuation , either active fibres through wavelength division multiplexing or spare fibres .sfp transceivers are also being evaluated for the next version of the beam interlock system , to transmit the beam permit loop signals at the cern accelerator complex .the flexibility and features provided by these transceivers are of great interest to the next version of the bis .t. wijnands , l. k. de jonge , j. kuhnhenn , s. k. hoeffgen , u. weinand , `` optical absorption in commercial single mode optical fibers in a high energy physics radiation field '' , ieee trans .55 ( 2008 ) 2216 - 2222
the optical fibres that transmit the beam permit loop signals at the cern accelerator complex are deployed through radiation areas. this may result in increased attenuation of the fibres, which reduces the power margin of the links. in addition, other events may cause the links to malfunction and result in false dumps, reducing the availability of the accelerator chain and affecting physics data taking. in order to evaluate the state of the fibres, an out-of-band fibre monitoring system is proposed, working in parallel with the actual beam permit loops. the future beam interlock system, to be deployed during lhc long shutdown 2, will implement online, real-time monitoring of the fibres, a feature the current system lacks. commercial off-the-shelf components are proposed to implement the optical transceivers whenever possible, instead of ad-hoc designs.
malliavin weight sampling ( mws ) is a method for computing derivatives of averaged system properties with respect to parameters in stochastic simulations .the method has been used in quantitative financial modelling to obtain the `` greeks '' ( price sensitivities ) ; and , as the girsanov transform , in kinetic monte carlo simulations for systems biology .similar ideas have been used to study fluctuation - dissipation relations in supercooled liquids .however , mws appears to be relatively unknown in the fields of soft matter , chemical and biological physics , perhaps because the theory is relatively impenetrable for non - specialists , being couched in the language of abstract mathematics ( _ e.g. _ , martingales , girsanov transform , malliavin calculus , _ etc ._ ) ; an exception in financial modelling is ref . .mws works by introducing an auxiliary stochastic quantity , the malliavin weight , for each parameter of interest .the malliavin weights are updated alongside the system s usual ( unperturbed ) dynamics , according to a set of rules .the derivative of any system function , , with respect to a parameter of interest is then given by the average of the product of with the relevant malliavin weight , or in other words by a weighted average of , in which the weight function is given by the malliavin weight .importantly , mws works for non - equilibrium situations , such as time - dependent processes or driven steady states .it thus complements existing methods based on equilibrium statistical mechanics , which are widely used in soft matter and chemical physics .mws has so far been discussed only in the context of specific simulation algorithms . in this paper , we present a pedagogical and generic approach to the construction of malliavin weights , which can be applied to any stochastic simulation scheme .we further describe its practical implementation in some detail using as our example one dimensional brownian motion in a force field .the rules for the propagation of malliavin weights have been derived for the kinetic monte - carlo algorithm , for the metropolis monte - carlo scheme and for both underdamped and overdamped brownian dynamics . here, we present a generic theoretical framework , which encompasses these algorithms and also allows extension to other stochastic simulation schemes .we suppose that our system evolves in some state space , and a point in this state space is denoted as . here, we assume that the state space is continuous , but our approach can easily be translated to discrete or mixed discrete - continuous state spaces .since the system is stochastic , its state at time is described by a probability distribution , . in each simulation step, the state of the system changes according to a propagator , , which gives the probability that the system moves from point to point during an application of the update algorithm .the propagator has the property that where is the probability distribution after the update step has been applied and the integral is over the whole state space .we shall write this in a shorthand notation as integrating eq . 
over , we see that the propagator must obey .it is important to note , however , that we do _ not _ assume the detailed balance condition , for some equilibrium .thus , our results apply to systems whose dynamical rules do not obey detailed balance ( such as chemical models of gene regulatory networks ) , as well as to systems out of steady state .we observe that the ( finite ) product is proportional to the probability of occurrence of a trajectory of states , , and can be interpreted as a _ trajectory weight_.let us now consider the average of some quantity , , over the state space , in shorthand the quantity , , might well be a complicated function of the state of the system : for example the extent of crystalline order in a particle - based simulation , or a combination of the concentrations of various chemical species in a simulation of a biochemical network .we suppose that we are interested in the sensitivity of to variations in some parameter of the simulation , which we denote as .this might be one of the force field parameters ( or the temperature ) in a particle - based simulation or a rate constant in a kinetic monte carlo simulation .we are interested in computing .this quantity can be written as where let us now suppose that we track in our simulation not only the physical state of the system , but also an auxiliary stochastic variable , which we term . at each simulation step , is updated according to a rule that depends on the system state ; this does not perturb the system s dynamics , but merely acts as a `` readout '' . by tracking ,we _ extend _ the state space , so that becomes .we can then define the average , which is an average of the value of in the extended state space , with the constraint that the original ( physical ) state space point is fixed at ( see further below ) .our aim is to define a set of rules for updating , such that ,_ i.e. _ , such that the average of the auxiliary variable , for a particular state space point , measures the _ derivative _ of the probability distribution with respect to the parameter of interest , . if this is the case then , from eq . the auxiliary variable , , is the malliavin weight corresponding to the parameter , .how do we go about finding the correct updating rule ?if the malliavin weight exists , we should be able to derive its updating rule from the system s underlying stochastic equations of motion .we obtain an important clue from differentiating eq . with respect to .extending the shorthand notation , one finds this strongly suggests that the rule for updating the malliavin weight should be in fact , this is correct .the proof is not difficult and , for the case of brownian dynamics , can be found in the supplementary material for ref .it involves averaging eq . in the extended state space , . from a practical point of view , for each time step , we implement the following procedure : * propagate the system from its current state , , to a new state , , using the algorithm that implements the stochastic equations of motion ( brownian , kinetic monte - carlo , _ etc . _ ) ; * with knowledge of and , and the propagator , , calculate the change in the malliavin weight ; * update the malliavin weight according to . at the start of the simulation , the malliavin weight is usually initialised to .let us first suppose that our system is not in steady state , but rather the quantity in which we are interested is changing in time , and likewise is a time - dependent quantity . 
to compute , we run independent simulations , in each one tracking as a function of time , and the product , .the quantities and are then given by [ eq : samp ] & \frac{\partial{\langle a(t)\rangle}}{\partial\lambda}\approx \frac{1}{n}\sum_{i=1}^n a_i(t)\,q_{\lambda , i}(t)\,,\end{aligned}\ ] ] where is the value of recorded in the simulation run ( and likewise for ) .error estimates can be obtained from replicate simulations .if , instead , our system is in steady state , the procedure needs to be modified slightly .this is because the variance in the values of across replicate simulations increases linearly in time ( this point is discussed further below ) . for long times , computation of using eq .therefore incurs a large statistical error .fortunately , this problem can easily be solved , by computing the correlation function \rangle}\ , .\label{eq : c1}\ ] ] in steady state , , with the property that as . in a single simulation run ,we simply measure and at time intervals separated by ( which is typically multiple simulation steps ) . at each measurement , we compute $ ] .we then average this latter quantity over the whole simulation run to obtain an estimate of . for this estimate to be accurate , we require that is long enough that has reached its plateau value ; this typically means that should be longer than the typical relaxation time of the system s dynamics .the correlation function approach is discussed in more detail in refs . . returning to a more theoretical perspective, it is interesting to note that the rule for updating the malliavin weight , eq . , depends deterministically on and .this implies that the value of the malliavin weight at time is completely determined by the trajectory of system states during the time interval , .in fact , it is easy to show that where is the trajectory weight defined in eq . .similar expressions are given in refs .thus , the malliavin weight , , is not fixed by the state point , , but by the entire trajectory of states that have led to state point . since many different trajectories can lead to , many values of are possible for the same state point , .the average is actually the expectation value of the malliavin weight , averaged over all trajectories that reach state point at time .this can be used to obtain an alternative proof that .suppose we sample trajectories , of which end up at state point ( or a suitably defined vicinity thereof , in a continuous state space ) .we have .then , the malliavin property implies , and hence , .up to now , we have assumed that the quantity , , does not depend on the parameter , . there may be cases , however , when does have an explicit -dependence . in these cases ,eq . 
should be replaced by this reveals a kind of ` algebra ' for malliavin weights : we see that the operations of taking an expectation value and taking a derivative can be commuted , provided the malliavin weight is introduced as the commutator .we can also extend our analysis further to allow us to compute higher derivatives with respect to the parameters .these may be useful , for example , for increasing the efficiency of gradient - based parameter optimisation algorithms .taking the derivative of eq .with respect to a second parameter , , gives & \quad={\bigl\langle \frac{\partial^2\ !a}{\partial\lambda\partial\mu}\bigr\rangle } + { \bigl\langle \frac{\partial a}{\partial\lambda}\ , q_\mu \bigr\rangle } + { \bigl\langle a\,\frac{\partial q_\lambda}{\partial\mu}\bigr\rangle}\\[6pt ] & { } \hspace{9em } { } + { \bigl\langle \frac{\partial a}{\partial\mu}\,q_\lambda \bigr\rangle } + { \langle a\,q_\lambda \,q_\mu\rangle}\nonumber \\[6pt ] & = { \langle a\,(q_{\lambda\mu}+q_\lambda q_\mu)\rangle } + { \bigl\langle \frac{\partial a}{\partial\lambda}\,q_\mu \bigr\rangle } + { \bigl\langle \frac{\partial a}{\partial\mu}\,q_\lambda \bigr\rangle } + { \bigl\langle \frac{\partial^2\ ! a}{\partial\lambda\partial\mu}\bigr\rangle}\,.\nonumber\end{aligned}\ ] ] in the second line , we iterate the commutation relation and , in the third line , we collect like terms and introduce in the case where is independent of the parameters , this result simplifies to the quantity , , here is a new , second order malliavin weight , which , from eqs . and , satisfies to compute second derivatives with respect to the parameters, we should therefore track these second order malliavin weights in our simulation , updating them alongside the existing malliavin weights by the rule a corollary , if we take as a constant in eqs . and respectively, is that quite generally and .steady state problems can be approached by extending the correlation function method to second order weights .define , _ cf . _ eq . , \\[3pt ] & \hspace{9em}{}-[q_{\lambda\mu}(t')+q_\lambda(t ' ) q_\mu(t')]\}\rangle\ , .\end{split}\ ] ] as in the first order case , in steady state , we expect , with the property that as .we now demonstrate this machinery by way of a practical but very simple example , namely one - dimensional ( overdamped ) brownian motion in a force field . in this case, the state space is specified by the particle position , , which evolves according to the langevin equation in this is the force field and is gaussian white noise of amplitude , where is temperature . without loss of generalitywe have chosen units so that there is no prefactor multiplying the force field .we discretise the langevin equation to the following updating rule where is the time step and is a gaussian random variate with zero mean and variance . corresponding to this updating rule is an explicit expression for the propagator this follows from the statistical distribution of .let us suppose that the parameter of interest , , enters into the force field ( the temperature , , could also be chosen as a parameter ) . making this assumption we can simplify this result by noting that from eq ., . making use of this , the final updating rule for the malliavin weight is where is the _ exact same _ value that was used for updating the position in eq . 
.because the value of is the same for the updates of position and of , the change in is completely determined by the end points , and .the derivative , , should be evaluated at , since that is the position at which the force is computed in eq . .since in eq. is a random variate uncorrelated with , averaging eq .shows that .as the initial condition is , this means that , as predicted in the previous section .is essentially the same as that derived in ref . . if we differentiate eq . with respect to a second parameter , , we get & \hspace{7em}{}-\frac{{\delta t}}{2t}\,\frac{\partial f}{\partial\lambda } \,\frac{\partial f}{\partial\mu}\ , .\end{split } \label{eq : e4}\ ] ] hence , the updating rule for the second order malliavin weight can be written as where , again , is the exact same value as that used for updating the position in eq . .if we average eq . over replicate simulation runs , we find .hence , the mean value , , drifts in time , unlike or .however , one can show that the mean value of the sum , , is constant in time and equal to zero , as long as , initially , .now , let us consider the simplest case of a particle in a linear force field , ( also discussed in ref .this corresponds to a harmonic trap with the potential .we let the particle start from at and track its time - dependent relaxation to the steady state .we shall set for simplicity .the langevin equation can be solved exactly for this case , and the mean position evolves according to we suppose that we are interested in derivatives with respect to both and , for a `` baseline '' parameter set in which is finite , but .taking derivatives of eq . and setting , we find & \frac{\partial^2{\langle x(t)\rangle}}{\partial h\partial \kappa } = \frac{t e^{-\kappa t}}{\kappa } - \frac{1-e^{-\kappa t}}{\kappa^2}\ , .\end{split } \label{eq:1dtrap}\ ] ] we now show how to compute these derivatives using malliavin weight sampling . applying the definitions in eqs . and , the malliavin weight increments are and the position update itself is we track these malliavin weights in our simulation and use them to calculate derivatives according to & \frac{\partial^2{\langle x(t)\rangle}}{\partial h\partial \kappa } = { \langle x(t ) ( q_{h\kappa}(t)+q_h(t ) q_\kappa(t))\rangle}\ , .\end{split } \label{eq : mws2}\ ] ] eqs . have been coded up as a matlab script , described in appendix [ app : script ] . a typical result generated by running this scriptis shown in fig .eqs . andare iterated with up to , for a trap strength and initial position .the weighted averages in eq . are evaluated as a function of time , for samples , as in eq . .these results are shown as the solid lines in fig .the dashed lines are theoretical predictions for the time dependent derivatives from eqs . .as can be seen , the agreement between the time - dependent derivatives and the malliavin weight averages is very good .( top curve , blue ) , ( middle curve , green ) and ( bottom curve , red ) . solid lines ( slightly noisy ) are the malliavin weight averages , generated by running the matlab script described in appendix [ app : script ] . dashed lines are theoretical predictions from eqs . .] as discussed briefly above , in this procedure , the sampling error in the computation of is expected to grow with time .[ fig2 ] shows the mean square malliavin weight as a function of time for the same problem .for the first order weights , and , the growth rate is typically linear in time . 
indeed , from eqs ., one can prove that in the limit ( see appendix [ app : anal ] ) thus behaves exactly as a random walk , as should be obvious from the updating rule .the other weight , , also ultimately behaves as a random walk , since in steady state ( from equipartition ) .[ fig2 ] also shows that the second order weight , , grows superdiffusively ; one can show that , eventually , , although the transient behaviour is complicated .full expressions are given in appendix [ app : anal ] .this suggests that computation of second order derivatives is likely to suffer more severely from statistical sampling problems than the computation of first order derivatives .in this paper , we have provided an outline of the generic use of malliavin weights for sampling derivatives in stochastic simulations , with an emphasis on practical aspects .the usefulness of mws for a particular simulation scheme hinges on the simplicity , or otherwise , of constructing the propagator , , which fixes the updating rule for the malliavin weights according to eq . .the propagator is determined by the algorithm used to implement the stochastic equations of motion ; mws may be easier to implement for some algorithms than for others .we note , however , that there is often some freedom of choice about the algorithm , such as the choice of a stochastic thermostat in molecular dynamics , or the order in which update steps are implemented . in these cases ,a suitable choice may simplify the construction of the propagator and facilitate the use of malliavin weights .ooo rosalind j. allen is supported by a royal society university research fellowship . .here , we present analytic results for the growth in time of the mean square malliavin weights . we can express the rate of growth of the mean of a generic function , , as on the right - hand side ( rhs ) , the values of , , and are substituted from the updating rules in eqs . and .in calculating the rhs average , we note that the distribution of is a gaussian independent of the position and malliavin weights , and thus , one can substitute , , , , _ etc._. 
proceeding in this way , with judicious choices for , one can obtain the following set of coupled ordinary differential equations ( odes ) & \frac{d{\langle x^2\rangle}}{dt}+2\kappa{\langle x^2\rangle}=2\,,\quad \frac{d{\langle x q_h\rangle}}{dt}+\kappa{\langle x q_h\rangle}=1\,,\nonumber\\[9pt ] & \frac{d{\langle x^2q_h^2\rangle}}{dt}+2\kappa{\langle x^2q_h^2\rangle } = 2{\langle q_h^2\rangle}+4{\langle x q_h\rangle}+\frac{{\langle x^2\rangle}}{2}\,,\nonumber\\[9pt ] & \frac{d{\langle xq_hq_\kappa\rangle}}{dt}+\kappa{\langle xq_hq_\kappa\rangle } = -{\langle x q_h\rangle}-\frac{{\langle x^2\rangle}}{2}\,,\\[9pt ] & \frac{d{\langle ( q_{h\kappa}+q_hq_\kappa)^2\rangle}}{dt } = \frac{{\langle q_\kappa^2\rangle}}{2}-{\langle x q_hq_\kappa\rangle } + \frac{{\langle x^2q_h^2\rangle}}{2}\nonumber\\[6pt ] & \hspace{9em}\bigl({}=\frac{{\langle ( q_\kappa - x q_h)^2\rangle}}{2}\bigr)\,.\nonumber\end{aligned}\ ] ] some of these have already been encountered in the main text .the last one is for the desired mean square second order weight .the odes can be solved with the initial conditions that at , all averages involving malliavin weights vanish , but .the results include _inter alia_ & { \langle ( q_{h\kappa}+q_hq_\kappa)^2\rangle}= \frac{2\kappa^2t^2+(19+\kappa x_0 ^2)\kappa t+2\kappa x_0 ^ 2 - 34 } { 8\kappa^3}\nonumber\\[9pt ] & \hspace{6em}{}+\frac{2\kappa t+10-\kappa x_0 ^ 2}{2\kappa^3 } \,e^{-\kappa t}\nonumber\\[6pt ] & \hspace{9em}{}+ \frac{(1-\kappa x_0 ^ 2)\kappa t+2\kappa x_0 ^ 2 - 6}{8\kappa^3}\,e^{-2\kappa t}\ , .\label{eq : app1}\end{aligned}\ ] ] these are shown as the dashed lines in fig .the leading behaviour of the last as is however , the approach to this limit is slow .the matlab script in listing [ list1 ] was used to generate the results shown in fig .it implements eqs . above , making extensive use of the compact matlab syntax for array operations , for instance , invoking ` . * ' for element - by - element multiplication of arrays . here is a brief explanation of the script . _lines 13 _ initialise the problem and the parameter values . _ lines 4 _ and _ 5 _ calculate the number of points in a trajectory and initialise a vector containing the time coordinate of each point ._ lines 69 _ set aside storage for the actual trajectory , malliavin weights and cumulative statistics . _ lines 1023 _implement a pair of nested loops , which are the kernel of the simulation . within the outer ( trajectory sampling ) loop , _ line 11 _ initialises the particle position and malliavin weights , _ line 12 _ precomputes a vector of random displacements ( gaussian random variates ) and _ lines 1318 _ generate the actual trajectory . within the inner (trajectory generating loop ) , _ lines 1417 _ are a direct implementation of eqs . and .after each individual trajectory has been generated , the cumulative sampling step implied by eq . is done in _lines 1922 _ ; after all the trajectories have been generated , these quantities are normalised in _ lines 24 _ and _ 25_. finally , _ lines 2632 _ generate a plot similar to fig . 
[ fig1 ] ( albeit with the addition of ) , and _lines 33 _ and _ 34 _ show how the data can be exported in tabular format for replotting using an external package .listing [ list1 ] is complete and self - contained .it will run in either matlab or octave .one minor comment is perhaps in order .the choice was made to precompute a vector of gaussian random variates , which are used as random displacements to generate the trajectory and update the malliavin weights .one could equally well generate random displacements on - the - fly , in the inner loop .for this one - dimensional problem , storage is not an issue , and it seems more elegant and efficient to exploit the vectorisation capabilities of matlab . for a more realistic three - dimensional problem , with many particles ( and a different programming language ) , it is obviously preferable to use an on - the - fly approach . + .... clear all randn('seed ' , 12345 ) ; kappa = 2 ; x0 = 1 ; tend = 5 ; dt = 0.01 ; nsamp = 10 ^ 5 ; npt = round(tend / dt ) + 1 ; t = ( 0:npt-1 ) ' * dt ; x = zeros(npt , 1 ) ; xi = zeros(npt , 1 ) ; qh = zeros(npt , 1 ) ; qk = zeros(npt , 1 ) ; qhk = zeros(npt , 1 ) ; x_av = zeros(npt , 1 ) ; xqh_av = zeros(npt , 1 ) ; xqk_av = zeros(npt ,1 ) ; xqhk_av = zeros(npt , 1 ) ; for samp = 1:nsamp x(1 ) = x0 ; qh(1 ) = 0 ; qk(1 ) = 0 ; qhk(1 ) = 0 ; xi = randn(npt , 1 ) * sqrt(2*dt ) ; for i = 1:npt-1 x(i+1 ) = x(i ) - kappa*x(i)*dt + xi(i ) ; qh(i+1 ) = qh(i ) + 0.5*xi(i ) ; qk(i+1 ) = qk(i ) - 0.5*x(i)*xi(i ) ; qhk(i+1 ) = qhk(i ) + 0.5*x(i)*dt ; end x_av = x_av + x ; xqh_av = xqh_av + x.*qh ; xqk_av = xqk_av + x.*qk ; xqhk_av = xqhk_av + x.*(qhk + qh.*qk ) ; end x_av = x_av / nsamp ; xqh_av = xqh_av / nsamp ; xqk_av = xqk_av / nsamp ; xqhk_av = xqhk_av / nsamp ; hold on plot(t , x_av , ' k ' ) ; plot(t , xqh_av , ' b ' ) plot(t , xqk_av , ' g ' ) ; plot(t , xqhk_av , ' r ' ) plot(t , x0*exp(-kappa*t ) , ' k-- ' ) plot(t , ( 1-exp(-kappa*t))/kappa , ' b-- ' ) plot(t ,-x0*t.*exp(-kappa*t ) , ' g-- ' ) plot(t , t.*exp(-kappa*t)/kappa-(1-exp(-kappa*t))/(kappa^2 ) , ' r-- ' ) result = [ t x_av xqh_av xqk_av xqhk_av ] ; save('result.dat ' , ' -ascii ' , ' result ' ) ....
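for readers without access to matlab or octave , the following numpy transcription ( ours ; variable names mirror listing [ list1 ] , but it has not been benchmarked against the original script ) reproduces the same sampling loop .
....
import numpy as np

# parameters as in listing [list1]
kappa, x0, tend, dt, nsamp = 2.0, 1.0, 5.0, 0.01, 10**5
npt = int(round(tend / dt)) + 1
t = np.arange(npt) * dt

# accumulators for <x>, <x q_h>, <x q_kappa> and <x (q_{h kappa} + q_h q_kappa)>
x_av = np.zeros(npt); xqh_av = np.zeros(npt)
xqk_av = np.zeros(npt); xqhk_av = np.zeros(npt)

rng = np.random.default_rng(12345)
for _ in range(nsamp):
    x = np.empty(npt); qh = np.zeros(npt); qk = np.zeros(npt); qhk = np.zeros(npt)
    x[0] = x0
    xi = rng.normal(0.0, np.sqrt(2 * dt), size=npt)    # gaussian random displacements
    for i in range(npt - 1):
        x[i + 1] = x[i] - kappa * x[i] * dt + xi[i]    # euler update of the trajectory
        qh[i + 1] = qh[i] + 0.5 * xi[i]                # first order weight w.r.t. h
        qk[i + 1] = qk[i] - 0.5 * x[i] * xi[i]         # first order weight w.r.t. kappa
        qhk[i + 1] = qhk[i] + 0.5 * x[i] * dt          # auxiliary second order weight
    x_av += x
    xqh_av += x * qh
    xqk_av += x * qk
    xqhk_av += x * (qhk + qh * qk)

# normalise the cumulative statistics
for arr in (x_av, xqh_av, xqk_av, xqhk_av):
    arr /= nsamp
....
the explicit inner loop mirrors lines 13 - 18 of the listing and is therefore slow in pure python ; in practice one would vectorise it or reduce the number of sampled trajectories .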
malliavin weight sampling ( mws ) is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations , without perturbing the system s dynamics . it applies to systems in or out of equilibrium , in steady state or time - dependent situations , and has applications in the calculation of response coefficients , parameter sensitivities and jacobian matrices for gradient - based parameter optimisation algorithms . the implementation of mws has been described in the specific contexts of kinetic monte carlo and brownian dynamics simulation algorithms . here , we present a general theoretical framework for deriving the appropriate mws update rule for any stochastic simulation algorithm . we also provide pedagogical information on its practical implementation .
in this paper we are concerned with the following implicit convex feasibility problem ( icfp ) . given set - valued mappings , with closed and convex value sets , the icfp is , we call the sets variable sets for obvious reasons and include implicit in this problem name because the sets defining it are not given explicitly ahead of time .the problem is inspired by the work of gholami et al . on solving the cooperative wireless sensor network positioning problem in ( ) . there , the sets are circles ( balls ) with varying centers .a special instance of the icfp is obtained by taking fixed sets for all and all yielding the well - known , see , e.g. , , convex feasibility problem ( cfp ) which is , the cfp formalism is at the core of the modeling of many inverse problems in various areas of mathematics and the physical sciences .this problem has been widely explored and researched in the last decades , see , e.g. , ( * ? ? ?* section 1.3 ) , and many iterative methods where proposed , in particular projection methods , see , e.g. , . these are iterative algorithms that use projections onto sets , relying on the principle that when a family of sets is present , then projections onto the given individual sets are easier to perform than projections onto other sets ( intersections , image sets under some transformation , etc . ) that are derived from the given individual sets .+ gholami et al . in introduced the implicit convex feasibility problem ( icfp ) in ( or ) into their study of the wireless sensor network ( wsn ) positioning problem . in their reformulationthe variable sets are circles or balls whose centers represent the sensors locations and their broadcasting range is represented as the radii .some of these centers are known a priori while the rest are unknown and need to be determined .the wsn positioning problem is to find a point , in an appropriate product space , which represents the circles or balls centers .the precise relationship between the wsn problem and the icfp can be found in ( * ? ? ?* section b ) . for more details and other examples of geometric positioning problems ,see .we focus on the icfp in and present projection methods for its solution .this expands and generalizes the special case treated in gholami et al .moreover , we demonstrate the applicability of our approach to the task of image denoising , where we impose constraints on the image intensity at every image pixel . because the constraint sets depend on the unknown variables to be determined , the method is able to adapt to the image contents .this application demonstrates the usefulness of the icfp approach to image processing .the paper is structured as follows . in section [ sec : proj - implicit ]we show how to calculate projections onto variable sets . in section [ sec : algs ] we present two projection type algorithmic schemes for solving the icfp , sequential and simultaneous , along with their convergence proofs . 
in section[ sec : special ] we present the icfp application to image denoising together with numerical visualization of the performance of the methods .finally , in section [ sec : summary ] we discuss further research directions and propose a further generalization of the icfp .we begin by recalling the split convex feasibility problem ( scfp ) and the constrained multiple - set split convex feasibility problem ( cmsscfp ) that will be useful to our subsequent analysis .[ p : scfp ] censor and elfving .given nonempty , closed and convex sets and a linear operator , the ` split convex feasibility problem ` ( scfp ) is : another related more general problem is the following . [ p : cmsscfp ] masad and reich .let and and be nonempty , closed and convex subsets of respectively. given linear operators and another nonempty , closed and convex , the ` constrained multiple - set split convex feasibility problem ` _ _ _ _ ( cmsscfp ) is : if for all , then we obtain a multiple - set split convex feasibility problem _ _ _ _ ( msscfp ) .a prototype for the above scfp and msscfp is the * split inverse problem * ( sip ) presented in and given next .[ p : sip ] given two vector spaces and and a linear operator , we look at two inverse problems .one , denoted by ip is formulated in and the second , denoted by ip , is formulated in .the ` split inverse problem ` ( sip ) is : in different choices for ip and ip are proposed , such as variational inequalities and minimization problems .the latter enable , for example , to obtain a least - intensity feasible solution in intensity - modulated radiation therapy ( imrt ) treatment planning as in .in we further explore and extend this modeling technique to include non - linear mappings between the two spaces and .let be a nonempty , closed and convex set .for each point , there exists a unique nearest point in , denoted by , i.e. , the mapping is the _ metric projection _ of onto .it is well - known that is a _ nonexpansive _ mapping of onto , i.e. , the metric projection is characterized by the following two properties: and if is a hyperplane , then ( [ eq : projp1 ] ) becomes an equality , .we are dealing with variable convex sets that can be described by set - valued mappings .[ ex:1]for a set - valued mapping we call the sets defined below , variable sets .let be a given set , called in the sequel a core set .\(i ) given an operator , the variable sets for , are obtained from shifting by the vectors .\(ii ) given an let } : \mathbb{r}^{n}\rightarrow\mathbb{r}^{n} ] are the } ] , where is the identity matrix .next we present a lemma that shows how to calculate the metric projection onto such variable sets via projections onto the core set when the operator is linear and denoted by the fixed matrix and } ] denotes the closed interval between and . 
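before the lemma , the following small sketch ( ours , not part of the original text ) illustrates the simplest situation of example [ ex:1](i ) : a euclidean ball taken as the core set and shifted by a fixed vector , using the elementary identity that the projection onto the shifted set is obtained by shifting , projecting onto the core set , and shifting back .
....
import numpy as np

def project_ball(x, center=None, radius=1.0):
    """metric projection onto a closed euclidean ball (a simple core set)."""
    c = np.zeros_like(x) if center is None else center
    d = x - c
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else c + radius * d / nrm

def project_shifted(x, a, core_proj):
    """projection onto the shifted set C + a via the core projection:
    P_{C+a}(x) = a + P_C(x - a)."""
    return a + core_proj(x - a)

x = np.array([3.0, 4.0])
a = np.array([1.0, 1.0])                  # shift vector (hypothetical example data)
p = project_shifted(x, a, project_ball)   # projection of x onto the unit ball centred at a

# nonexpansiveness check: ||P(x) - P(y)|| <= ||x - y||
y = np.array([0.0, 0.0])
q = project_shifted(y, a, project_ball)
assert np.linalg.norm(p - q) <= np.linalg.norm(x - y) + 1e-12
....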
to also cover the case of boundary pixels , we assume a constant extension of the image outside the image domain , so that is well - defined for every .we remark that there is a relation to tv regularization , since the total variation of the discrete signal is minimal for .analogously to we define constraint sets for every horizontal , vertical and diagonal edge of the underlying grid graph with vertices corresponding to the pixel positions : ,\\ \omega_3^{i , j } & : = [ \min({y}_{i+1,j+1},{y}_{i-1,j-1}),\max({y}_{i+1,j+1},{y}_{i-1,j-1})],\\ \omega_4^{i , j } & : = [ \min({y}_{i+1,j-1},{y}_{i-1,j+1}),\max({y}_{i+1,j-1},{y}_{i-1,j+1})].\\ \end{aligned } \label{eq : denoising : constraint_sets}\ ] ] in total , we end up with four different constraint sets for pixel . notethat we can express each set in the form+m_{s}^{i , j}({y}),\ ] ] where and , and , , defined accordingly for the vertical and the two diagonal directions .we further slightly generalize the sets by introducing a scaling factor and re - define+m_{s}^{i , j}({y}),\text { for } s=1,2,3,4.\ ] ] based on these local constraint sets , we look at the cfp recall that our approach presented in section [ sec : algs ] allows to depend on , in contrast with .so , we consider the set - valued mappings +m_{s}^{i , j}(x ) , \label{eq : denoising : scaled_cij_x}\ ] ] and derive the icfp where represents the product of sets .note that the variable sets attain the form . in our computational experimentswe consider two test images .the shepp - logan phantom of figure [ fig : denoising : cfpversusicfp](a ) , displayed with gaussian noise of zero mean and variance in figure [ fig : denoising : cfpversusicfp](b ) , and an ultrasound image , displayed in figure [ fig : denoising : cfpversusicfp](c ) . for the icfp, we implemented both algorithms [ alg : gc - sim ] and algorithm [ alg : gc - seq ] .the parameters were chosen to be and iteration steps for algorithm [ alg : gc - sim ] and for and iteration steps for algorithm [ alg : gc - seq ] .if not noted otherwise , we use for the latter .we compared the performance of the cfp with that of the icfp .for both problems we focused on the simultaneous projection method of algorithm [ alg : gc - sim ] , which is known to converge even in the inconsistent case ( i.e. , the case where the intersection of constraint sets is empty ) , see , e.g , . in figure [ fig : denoising : cfpversusicfp ] we present results of comparing the cfp and icfp approaches on the noisy shepp - logan test image and on the ultrasound image that we used .we also provide close - ups for a specific region of interest , in order to highlight the differences mainly in texture .we observe that the cfp is not suited to denoise the data .in contrast , solving the icfp leads to a denoised image . in figure[ fig : denoising : varying_a ] we study the influence of the parameter in for the icfp .our experiments show , that this parameter influences the smoothness of the result .the smaller the smoother the result becomes . 
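to make the variable sets of ( [ eq : denoising : scaled_cij_x ] ) concrete , the following sketch ( ours ) builds , for every pixel , the four neighbour intervals scaled by the factor around their midpoints ( our reading of the scaled sets ) and performs one simultaneous - projection sweep ; it is a crude illustration that ignores the relaxation parameters and exact weights of algorithm [ alg : gc - sim ] .
....
import numpy as np

def scaled_interval(u, v, a):
    """neighbour interval [min(u,v), max(u,v)] shrunk by the factor a
    around its midpoint m = (u + v)/2."""
    lo, hi, m = min(u, v), max(u, v), 0.5 * (u + v)
    return a * (lo - m) + m, a * (hi - m) + m

def averaged_projection_step(x, a=1.0):
    """one simultaneous-projection sweep: every pixel is replaced by the average
    of its projections (clips) onto the four directional neighbour intervals,
    which are rebuilt from the current iterate (the icfp case).
    constant extension is used outside the image domain, as in the text."""
    xp = np.pad(x, 1, mode='edge')
    out = np.empty_like(x)
    n1, n2 = x.shape
    for i in range(n1):
        for j in range(n2):
            ii, jj = i + 1, j + 1           # index into the padded image
            pairs = [(xp[ii, jj - 1], xp[ii, jj + 1]),          # horizontal
                     (xp[ii - 1, jj], xp[ii + 1, jj]),          # vertical
                     (xp[ii - 1, jj - 1], xp[ii + 1, jj + 1]),  # diagonal
                     (xp[ii - 1, jj + 1], xp[ii + 1, jj - 1])]  # anti-diagonal
            projs = []
            for u, v in pairs:
                lo, hi = scaled_interval(u, v, a)
                projs.append(min(max(x[i, j], lo), hi))  # clip = projection onto interval
            out[i, j] = np.mean(projs)
    return out

# toy usage on a synthetic noisy image (hypothetical data, not the test images of the paper)
img = np.random.default_rng(0).normal(size=(64, 64))
for _ in range(50):
    img = averaged_projection_step(img, a=0.5)
....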
in figure [ fig : denoising : empty_sets ] , we plot the percentage of constraint sets , depending on the iteration index for cfp and icfp , using both algorithms for the latter . note that in the cfp the constraint sets do not vary and , therefore , we have a constant fraction of empty intersections , while in the icfp we observe that the number of empty intersections decreases significantly to a final percentage of 3.5% , showing that the constraint sets adapt to the unknown in a meaningful way . we note that algorithm [ alg : gc - seq ] requires a -steering sequence for convergence . to demonstrate the influence of this sequence on the speed of convergence we conduct an experiment with different values of . obviously the choice of influences the solution to which the iterative sequence generated by the algorithm converges . we found that the results are of similar quality ( the ssim index proposed by wang et al . varies only in the range ) . due to the non - uniqueness of the solution , we can measure convergence only with respect to the individual solution the algorithm converges to . to this end , we assume that the sequence converges within the first 1000 steps . this assumption is satisfied , since the difference becomes small for . the plot in figure [ fig : denoising : betas ] shows the distances for during the first steps . we observe that the larger is , the faster decreases . thus , for a faster convergence a larger value of is advantageous . we conclude that the proposed icfp has the capability of denoising image data . although the approach in its current form can not cope with complex state - of - the - art denoising approaches , our experiments demonstrate the usefulness of imposing constraints on image intensities . moreover , we see the potential for further improvements of the approach , for example by additionally making the parameter depend on the unknown , or by combining the icfp with an objective function for denoising that has to be optimized subject to the given adaptive constraints .
[ figure [ fig : denoising : varying_a ] : denoising results for different values of the scaling parameter ; observe that decreasing it leads to a smoother image . ]
[ figure [ fig : denoising : empty_sets ] : percentage of empty intersections ( inconsistent cases ) during the iterations , comparing the algorithm for the cfp ( dash - dotted ) with icfp algorithm [ alg : gc - sim ] ( dashed ) and the sequential algorithm [ alg : gc - seq ] ( solid ) ; for the icfp the percentage decreases significantly during the iterations , while for the cfp it is by definition constant . ]
[ figure [ fig : denoising : betas ] : -steering sequences ( black solid , black dashed , gray solid and gray dashed ) in algorithm [ alg : gc - seq ] applied for smoothing the phantom image ; the limit depends on the chosen sequence , and a larger value is advantageous for a faster convergence ( distances depicted assuming convergence within the first 1000 steps ) . ]
in this paper we consider the implicit convex feasibility problem ( icfp ) where the variable sets are obtained by shifting , rotating and linearly - scaling fixed , closed convex sets .
by reformulating the problem as an unconstrained minimization we present two algorithmic schemes for solving the problem , one simultaneous and one sequential .we also comment that other first - order methods can be applied if , for example , the problem is phrased as a variational inequality problem .we illustrate the usefulness of the icfp as a new modeling technique for imposing constraints on image intensities in image denoising .two instances of the icfp , the wireless sensor network ( wsn ) positioning problem and the new image denoising approach suggest the applicability potential of the icfp . in this directionwe recall the nonlinear multiple - sets split feasibility problem ( nmssfp ) introduced by li et al . and later by gibali et al . . in this problemthe linear operator in the split convex feasibility problem ( [ eq : scfp ] ) is nonlinear and , therefore , the corresponding proximity function is not necessarily convex which means that additional assumptions on are required , such as differentiability . within this frameworkit will be interesting to know , for example , what are the necessary assumptions on in definition [ ex:1 ] which will guarantee convergence of our proposed schemes .another direction is when the unitary matrices are not given in advance but generated via some procedure ; for example , given a linear transformation , such that for all , }$ ] is a unitary matrix . the linearity assumption on will guarantee that our analysis here will still hold true .for a nonlinear our present analysis will not hold or , at least , not directly hold .* acknowledgments*. we thank the anonymous referees for their comments and suggestions which helped us improve the paper . the first author s work was supported by research grant2013003 of the united states - israel binational science foundation ( bsf ) .99 f. astrm , g. baravdish , and m. felsberg , a tensor variational formulation of gradient energy total variation . in x .-tai , e. bae , t. chan , and m. lysaker , editors , _ energy minimization methods in computer vision and pattern recognition _ ,volume 8932 of _ lecture notes in computer science _ , pages 307320 .springer , 2015 .m. r. gholami , l. tetruashvili , e. g. strm and y. censor , cooperative wireless sensor network positioning via implicit convex feasibility , _ ieee transactions on signal processing _ * 61 * ( 2013 ) , 58305840 .m. r. gholami and h. wymeersch and e. g. strm and m. rydstrm , wireless network positioning as a convex feasibility problem , _ eurasip journal on wireless communications and networking _ * 161 * ( 2011 ) , 115 .f. lenzen and j. berger , solution - driven adaptive total variation regularization , in j .-aujol , m. nikolova , and n. papadakis , editors , _ proceedings of ssvm 2015 _ , volume 9087 of _ lncs _ , pages 203215 , 2015 . c. popa , _ projection algorithms - classical results and developments : applications to image reconstruction _ , lambert academic publishing - av akademikerverlag gmbh & co. kg , saarbrcken , germany , 2012 .y. xiao , y. censor , d. michalski and j.m .galvin , the least - intensity feasible solution for aperture - based inverse planning in radiation therapy , _ annals of operations research _ * 119 * ( 2003 ) , 183203 .
the implicit convex feasibility problem attempts to find a point in the intersection of a finite family of convex sets , some of which are not explicitly determined but may vary . we develop simultaneous and sequential projection methods capable of handling such problems and demonstrate their applicability to image denoising in a specific medical imaging situation . by allowing the variable sets to undergo scaling , shifting and rotation , this work generalizes previous results wherein the implicit convex feasibility problem was used for cooperative wireless sensor network positioning where sets are balls and their centers were implicit . * keywords * : implicit convex feasibility split feasibility projection methods variable sets proximity function image denoising
the mobile ad hoc networks ( manets ) , a class of self - autonomous and flexible wireless networks , are highly appealing for lots of critical applications , like disaster relief , battlefield communications , d2d communications for traffic offloading , and coverage extension in future 5 g cellular networks .thus , understanding the fundamental performance limits of manets is of great importance to facilitate the application and commercialization of such networks . by now, extensive works have been devoted to the performance study of manets , which can be roughly classified into two categories , the ones with the consideration of practical limited buffer constraint and the ones without such consideration . regarding the performance study for manets without the buffer constraint ,grossglauser and tse first explored the capacity scaling law , i.e. , how the per node throughput scales in the order sense as the number of network nodes increases , and demonstrated that with the help of node mobility a per node throughput is achievable in such networks . later , neely _ studied the delay - throughput tradeoff issue in a manet under the independent and identically distributed ( i.i.d ) mobility model and showed that achievable delay - to - throughput ratio is lower bounded as ( where is the number of network nodes ) . then explored the delay - throughput tradeoff under a symmetric random walk mobility model , and showed that a average packet delay is incurred to achieve the per node throughput there . further studied the delay - throughput tradeoff under a general and unified mobility model , and revealed that there exists a critical value of delay below which the node mobility is not helpful for capacity improvement .recently , wang __ explored the throughput and delay performance for manets with multicast traffic in , and further conducted the network performance comparison between the unicast and multicast manets in .those results indicate that the mobility can significantly decrease the multicast gain on per node capacity and delay , and thus weaken the distinction between the two traffic models .while the above works represent a significant progress in the performance study of manets , in a practical manet , however , the buffer size of a mobile node is usually limited due to both its storage limitation and computing limitation .thus , understanding the real achievable performance of manets under the practical limited buffer constraint is of more importance for the design and performance optimization of such networks . by now , some initial results have been reported on the performance study of manets under buffer constraint . specifically , herdtner and chong explored the throughput - storage tradeoff in manets and showed that the throughput capacity under the relay buffer constraint scales as ( where is the relay buffer size of a node ) . considered a manet with limited source buffer in each node , and derived the corresponding cumulative distribution function of the source delay .recently , the throughput and delay performance of manets are further explored under the scenarios where each node is equipped with an infinite source buffer and a shared limited relay buffer .the motivation of our study is to take a step forward in the practical performance modeling for manets . 
in particular, this paper focuses on a practical manet where each network node maintains a limited source buffer of size to store its locally generated packets and also a limited shared relay buffer of size to store relay packets for all other nodes .this buffer constraint is general in the sense it covers all the buffer constraint assumptions adopted in available works as special cases , like the infinite buffer assumption ( , ) , limited source buffer assumption ( ) , and limited relay buffer assumption ( ) . to the best of our knowledge, this paper represents the first attempt on the exact performance modeling for manets with the general limited - buffer constraint .the main contributions of this study are summarized as follows : * based on the queuing theory and birth - death chain theory , we first develop a general theoretical framework to fully depict the source / relay buffer occupancy process in a manet with the general limited - buffer constraint , which applies to any distributed mac protocol and any mobility model that leads to the uniform distribution of nodes locations in steady state .* with the help of this framework , we then derive the exact expressions of several key network performance metrics , including achievable throughput , throughput capacity , and expected end - to - end ( e2e ) delay .we also provide the related theoretical analysis to reveal the fundamental network performance trend as the buffer size increases . *we further conduct case studies under two network scenarios and provide the corresponding theoretical / simulation results to demonstrate the efficiency and application of our theoretical framework .finally , we present extensive numerical results to illustrate both the impacts of buffer constraint on network performance and our theoretical findings .the remainder of this paper is organized as follows .section [ section : preliminaries ] introduces preliminaries involved in this paper .the framework for the buffer occupancy process analysis is developed in section [ section : framework ] .we derive exact expressions for throughput , throughput capacity and expected e2e delay in section [ section : performance ] , and conduct case studies in section [ section : case_studies ] .the numerical results and corresponding discussions are provided in section [ section : numerical_results ] .finally , we conclude this paper in section [ section : conclusion ] .in this section , we first introduce the system model , the general limited buffer constraint , the routing scheme and performance metrics involved in this study , and then present our overall framework for manet performance modeling under the general buffer constraint . _ network model _ : we consider a time - slotted manet , which consists of nodes randomly moving in a torus network area following a `` uniform type '' mobility model . 
with such mobility model ,the location process of a node is stationary and ergodic with stationary distribution uniform on the network area , and the trajectories of different nodes are independent and identically distributed .it is notable that such `` uniform type '' mobility model covers many typical mobility models as special cases , like the i.i.d model , random walk model , and random direction model ._ traffic model _ : there are unicast traffic flows in the network , and each node is the source of one traffic flow and also the destination of another traffic flow .more formally , let denote the destination node of the traffic flow originated from node , then the source - destination pairs are matched in a way that the sequence is just a derangement of the set of nodes .the packet generating process at each node is assumed to a bernoulli process with mean rate , so that with probability a new packet is generated in each time slot . during a time slot the total amount of data that can be transmitted from a transmitter to its corresponding receiveris fixed and normalized to one packet . as illustrated in fig .[ fig : buffer_constraint ] , we consider a general limited buffer constraint , where a node is equipped with a limited source buffer of size and a limited relay buffer of size .the source buffer is for storing the packets of its own flow ( locally generated packets ) and works as a fifo ( first - in - first - out ) source queue , while the relay buffer is for storing packets of all other flows and works as fifo virtual relay queues ( one queue per flow ) .when a packet of other flows arrives and the relay buffer is not full , the corresponding relay queue is dynamically allocated a buffer space ; once a head - of - line ( hol ) packet departs from its relay queue , this relay queue releases a buffer space to the common relay buffer .it is notable that the limited buffer constraint we consider is general in the sense it covers all the buffer constraint assumptions adopted in the available works as special cases . regarding the packet delivery scheme , we consider the two - hop relay ( 2hr ) routing protocol .the 2hr scheme is simple yet efficient , and has been widely adopted in available studies on the performance modeling of manets .in addition to the conventional 2hr scheme without feedback , we also consider the 2hr scheme with feedback , which avoids packet loss caused by relay buffer overflow and thus can support the more efficient operation of buffer - limited manets . without loss of generality , we focus on a tagged flow and denote its source node and destination node as and respectively .once gets access to wireless channel at the beginning of a time slot , it executes the 2hr scheme without / with feedback as follows . 1 .( * source - to - destination * ) + if is within the transmission range of , executes the source - to - destination operation .if the source queue of is not empty , transmits the hol packet to ; else remains idle .if is not within the transmission range of , randomly designates one of the nodes ( say ) within its transmission range as its receiver , and chooses one of the following two operations with equal probability . 
*( * source - to - relay * ) + _ without feedback _ : if the source queue of is not empty , transmits the hol packet to ; else remains idle .+ _ with feedback _ : sends a feedback to to indicate whether its relay buffer is full or not .if the relay buffer of is not full , executes the same operation as that without feedback ; else remains idle . *( * relay - to - destination * ) + if has packet(s ) in the corresponding relay queue for , sends the hol packet of the queue to ; else remains idle .the performance metrics involved in this paper are defined as follows .* throughput * : the _ throughput _ of a flow ( in units of packets per slot ) is defined as the time - average number of packets that can be delivered from its source to its destination . * throughput capacity * : for the homogeneous finite buffer network scenario considered in this paper , the network level _ throughput capacity _ can be defined by the maximal achievable per flow throughput , i.e. , } t ] can be determined as where and is the normalization constant .notice that , where is a column vector of size with all elements being , we have we continue to analyze the occupancy process of the relay buffer in .let denote the number of packets in the relay buffer at time slot , then the occupancy process of the relay buffer can be regarded as a stochastic process on state space .notice that when serves as a relay in a time slot , the source - to - relay transmission and relay - to - destination transmission will not happen simultaneously .thus , suppose that the relay buffer is at state in the current time slot , only one of the following transition scenarios may happen in the next time slot : * to ( ) : the relay buffer is not full , and a packet arrives at the relay buffer .* to ( ) : the relay buffer is not empty , and a packet departures from the relay buffer . * to ( ) : no packet arrives at and departures from the relay buffer .let denote the one - step transition probability from state to state ( ) , then the occupancy process can be modeled as a birth - death chain as illustrated in fig .[ fig : birth - death_chain ] .let denote the probability that there are packets occupying the relay buffer in the stationary state , the stationary osd of the relay buffer $ ] is determined as where is the one - step transition matrix of the birth - death chain defined as , \label{eq : matrix}\ ] ] and is a column vector of size with all elements being 1 .notice that , and for , the expressions ( [ eq : balance_eq])([eq : matrix ] ) indicate that to derive , we need to determine the one - step transition probabilities and .[ lemma : transition_probability ] for the birth - death chain in fig .[ fig : birth - death_chain ] , its one - step transition probabilities and are determined as the proof is given in appendix [ appendix : transition_probability ] . by substituting ( [ eq : p_i_i+1 ] ) and ( [ eq : p_i_i-1 ] ) into ( [ eq : balance_eq ] ) and ( [ eq : normalization_eq ] ) , we can see that the stationary osd of the relay buffer is determined as where . 
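as a practical aside , once the one - step transition probabilities of lemma [ lemma : transition_probability ] have been evaluated , the stationary osd of the birth - death chain in fig . [ fig : birth - death_chain ] can also be obtained numerically from the detailed - balance relations ; the following sketch ( ours ) takes generic arrays of upward and downward probabilities as input , since the exact expressions depend on the network parameters .
....
import numpy as np

def birth_death_stationary(p_up, p_down):
    """stationary distribution of a birth-death chain on states 0..B.
    p_up[i]   = probability of moving from state i to i+1 (i = 0..B-1)
    p_down[i] = probability of moving from state i+1 to i (i = 0..B-1)
    detailed balance: pi[i+1] * p_down[i] = pi[i] * p_up[i]."""
    B = len(p_up)
    pi = np.empty(B + 1)
    pi[0] = 1.0
    for i in range(B):
        pi[i + 1] = pi[i] * p_up[i] / p_down[i]
    return pi / pi.sum()          # normalisation

# toy example (hypothetical values): relay buffer of size 5 with constant
# arrival/departure probabilities per slot
pi = birth_death_stationary(p_up=[0.3] * 5, p_down=[0.4] * 5)
overflow_prob = pi[-1]            # probability that the relay buffer is full
....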
under the scenario with feedback , node can not execute a source - to - relay transmission when the relay buffer of its intended receiver is full ( with overflow probability ) , causing the correlation between the osd analysis of source buffer and that of relay buffer .it is notable , however , the overflow probability only affects the service rate of the source buffer and the arrival rate at the relay buffer , while the occupancy processes of the source buffer and relay buffer can still be modeled as the b / b// queue and the birth - death chain respectively .thus , based on the similar analysis as that in section [ subsection : osd_nofeedback ] , we have the following corollary .[ corollary : osd_feedback ] for the network scenario with feedback , the osd of the source buffer and the osd of the relay buffer are determined as ( [ eq : osd_source ] ) and ( [ eq : osd_relay ] ) , where is given by ( [ eq : tau ] ) , and the service rate of the source buffer is evaluated as the proof is given in appendix [ appendix : osd_feedback ] .corollary [ corollary : osd_feedback ] indicates that for the evaluation of osds and , we need to determine the relay buffer overflow probability . from formula ( [ eq : osd_relay ] ) we have where we can see from ( [ eq : mu_s_fb])([eq : pi_s_0 ] ) that ( [ eq : self - mapping ] ) is actually an implicit function of , which can be solved by applying the fixed point theory .we provide in appendix [ appendix : fixed_point_iteration ] the detailed fixed - point iteration for solving .with the help of osds of source buffer and relay buffer derived in section [ section : framework ] , this section focuses on the performance analysis of the concerned buffer limited manet in terms of its throughput , expected e2e delay and throughput capacity .regarding the throughput and expected e2e delay of a manet with the general limited buffer constraint , we have the following theorem . [theorem : throughput_delay ] for a concerned manet with nodes , packet generating rate , source buffer size and relay buffer size , its per flow throughput and expected e2e delay are given by where ( resp . ) denotes the expected number of packets in the source buffer ( resp .relay buffer ) under the condition that the source buffer ( resp .relay buffer ) is not full , which is determined as and is determined by ( [ eq : mu_s_nf ] ) and ( [ eq : mu_s_fb ] ) for the scenarios without and with feedback respectively , , and are determined by ( [ eq : tau ] ) , ( [ eq : osd_source ] ) and ( [ eq : osd_relay ] ) , respectively .notice that packets of a flow are delivered to their destination through either one - hop transmission ( source - to - destination ) or two - hop transmission ( source - to - relay and relay - to - destination ) , so the per flow throughput can be derived by analyzing packet delivery rates of these two kinds of transmissions .regarding the expected e2e delay , it can be evaluated based on the analysis of expected source queuing delay and expected delivery delay of a tagged packet . for the detailed proof of this theorem, please refer to appendix [ appendix : throughput_delay ] . 
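evaluating the throughput and delay expressions of theorem [ theorem : throughput_delay ] in the feedback scenario requires the relay buffer overflow probability , which is defined only implicitly by ( [ eq : self - mapping ] ) ; a generic damped fixed - point iteration of the following form is typically sufficient . the sketch is ours and is not the algorithm of appendix [ appendix : fixed_point_iteration ] verbatim ; the mapping ` self_map ` stands for the right - hand side of the implicit equation , whose explicit form depends on the network parameters and is left abstract here .
....
def solve_overflow_probability(self_map, p0=0.5, damping=0.5, tol=1e-10, max_iter=10**4):
    """solve p = self_map(p) on [0, 1] by damped fixed-point iteration.
    self_map: callable returning the right-hand side of the implicit equation
    for the relay-buffer overflow probability."""
    p = p0
    for _ in range(max_iter):
        p_new = (1 - damping) * p + damping * self_map(p)
        p_new = min(max(p_new, 0.0), 1.0)   # keep the iterate a valid probability
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p  # last iterate if the tolerance was not reached

# usage with a toy contraction standing in for eq. (self-mapping)
p_star = solve_overflow_probability(lambda p: 0.2 + 0.3 * p)
....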
the formulas ( [ eq : throughput ] ) and ( [ eq : e2e_delay ] ) hold for both network scenarios without / with feedback , but different network scenarios will lead to different results of , and .based on the results of theorem [ theorem : throughput_delay ] , we can establish the following corollary ( see appendix [ appendix : feedback ] for the proof ) .[ corollary : feedback ] for a concerned manet with the general limited buffer constraint , adopting the feedback mechanism improves its throughput performance . to determine the throughput capacity , we first need the following lemma ( see appendix [ appendix : as_lambda_increase ] for the proof ) .[ lemma : as_lambda_increase ] for a concerned manet with the general limited buffer constraint , its throughput increases monotonically as the packet generating rate increases .based on lemma [ lemma : as_lambda_increase ] , we can establish the following theorem on throughput capacity .[ theorem : throughput_capacity ] for a concerned manet with nodes , source buffer size and relay buffer size , its throughput capacity is given by lemma [ lemma : as_lambda_increase ] indicates that } t = \lim\limits_{\lambda_s^{\text{\scriptsize + } } \to 1 } t. \label{eq : tc_lambda_1}\ ] ] from ( [ eq : tau ] ) , ( [ eq : osd_source ] ) and ( [ eq : osd_relay ] ) we can see that combining ( [ eq : throughput ] ) , ( [ eq : tc_lambda_1 ] ) , ( [ eq : pi_s_0_lambda_1 ] ) and ( [ eq : pi_r_br_lambda_1 ] ) , the expression ( [ eq : throughput_capacity ] ) then follows .based on the theorem [ theorem : throughput_delay ] and theorem [ theorem : throughput_capacity ] , we have the following corollary regarding the limiting and as the buffer size tends to infinity ( see appendix [ appendix : buffer_infinite ] for the proof ) .[ corollary : buffer_infinite ] for a concerned manet , its throughput increases as and/or increase , and as and/or tend to infinity , the corresponding limiting and are determined as ( 21 ) and ( 22 ) respectively , where .t= p_sd _ s + p_sr , & [ eq : t_bs_infinite ] + ( p_sd+p_sr)(1-_s(0 ) ) , & [ eq : t_br_infinite ] + \{_s^,p_sd+p_sr}. & and [ eq : t_bs_br_infinite ] [ eq : t_buffer_infinite ] \{d}= , & and + + , & and [ eq : d_bs_infinite ] + , & [ eq : d_br_infinite ] + , & , and [ eq : d_bs_br_infinite ] we can see from the theorem [ theorem : throughput_capacity ] that the throughput capacity of the concerned manet is the same for both the scenarios with and without feedback , and it is mainly determined by its relay buffer size .the corollary [ corollary : buffer_infinite ] indicates that our throughput and delay results of ( [ eq : throughput])([eq : e2e_delay ] ) are general in the sense that as tends to infinity , they reduce to the results in , while as both and tend to infinity , they reduce to the results in .in this section , we apply our theoretical framework to conduct performance analysis for two typical manet scenarios widely adopted in available studies , and present the corresponding theoretical / simulation results to demonstrate the efficiency and application of our framework .* cell - partitioned manet with local scheduling based mac ( ls - mac ) : * under this network scenario , the whole network area is evenly partitioned into non - overlapping cells . 
in each timeslot one cell supports only one transmission between two nodes within it , and concurrent transmissions in different cells will not interference with each other .when there are more than one node in a cell , each node in this cell becomes the transmitter equally likely .for such a manet , the corresponding probabilities , and can be determined by the following formulas ( see appendix [ appendix : basic_probabilities ] for derivations ) . * cell - partitioned manet with equivalence class based mac ( ec - mac ) : * in such a manet , the whole network area is evenly partitioned into non - overlapping cells , and each transmitter ( like the in fig .[ fig : transmission_range ] ) has a transmission range that covers a set of cells with horizontal and vertical distance of no more than cells away from the cell the transmitter reside in . to prevent simultaneous transmissions from interfering with each other ,the ec - mac is adopted . as illustrated in fig .[ fig : equivalence_class ] that with the ec - mac , all cells are divided into different ecs , and any two cells in the same ec have a horizontal and vertical distance of some multiple of cells .each ec alternatively becomes active every time slots , and each active cell of an active ec allows only one node in it ( if any ) to conduct data transmission .when there are more than one node in an active cell , each node in this cell becomes the transmitter equally likely . to enable as many number of concurrent transmissions to be scheduled as possible while avoiding interference among these transmissions , should be set as where is a guard factor specified by the protocol model .for such a manet the corresponding probabilities , and are determined by the following formulas ( see appendix [ appendix : basic_probabilities ] for derivations ) . where . to validate our theoretical framework for manet performance modeling, a simulator was developed to simulate the packet generating , packet queuing and packet delivery processes under above two network scenarios .each simulation task runs over a period of time slots , and we only collect data from the last of time slots to ensure the system is in the steady state . in the simulator , the following two typical mobility models have been implemented : * * i.i.d model : * at the beginning of each time slot , each node independently selects a cell among all cells with equal probability and then stays in it during this time slot . * * random walk ( rw ) model : * at the beginning of each time slot , each node independently selects a cell among its current cell and its adjacent cells with equal probability and then stays in it during this time slot .we summarize in fig .[ fig : validation ] the theoretical / simulation results for throughput and delay under the above two network scenarios , respectively . for each scenariowe consider the network settings of ( ) , and for the scenario with the ec - mac protocol we set and there .notice that the theoretical results here are obtained by substituting ( [ eq : p_sd_ls ] ) and ( [ eq : p_sr_ls ] ) ( resp .( [ eq : p_sd_ec ] ) and ( [ eq : p_sr_ec ] ) ) into the theoretical framework in fig .[ fig : framework ] . 
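the two mobility models used in the simulator are simple to reproduce ; the following sketch ( ours , written for illustration and not taken from the authors' simulator ) moves nodes on an m - by - m cell - partitioned torus according to either rule , assuming that `` adjacent '' means the eight surrounding cells .
....
import numpy as np

def move_nodes(cells, m, model, rng):
    """advance node positions by one time slot on an m x m cell torus.
    cells: integer array of shape (n, 2) holding each node's (row, col) cell.
    model: 'iid' -> pick a cell uniformly among all m*m cells,
           'rw'  -> pick uniformly among the current cell and its adjacent cells."""
    n = cells.shape[0]
    if model == 'iid':
        return rng.integers(0, m, size=(n, 2))
    # random walk: current cell or one of the 8 neighbouring cells, torus wrap-around;
    # drawing each coordinate step from {-1, 0, +1} gives all 9 options with equal probability
    steps = rng.integers(-1, 2, size=(n, 2))
    return (cells + steps) % m

# toy usage (hypothetical parameters)
rng = np.random.default_rng(1)
m, n = 16, 100
cells = rng.integers(0, m, size=(n, 2))
for _ in range(1000):
    cells = move_nodes(cells, m, 'rw', rng)
....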
fig .[ fig : validation ] show clearly that the simulation results match well with the theoretical ones for all the cases considered here , which indicates that our theoretical framework is applicable to and highly efficient for the performance modeling of different buffer limited manets .we can see from fig .[ fig : throughput_ls ] and fig .[ fig : throughput_ec ] that for a manet with ls - mac or ec - mac , as the packet generating rate increases , the per flow throughput increases monotonically and finally converges to its throughput capacity , which agrees with the conclusions of lemma [ lemma : as_lambda_increase ] and theorem [ theorem : throughput_capacity ] .another interesting observation of fig .[ fig : throughput_ls ] and fig .[ fig : throughput_ec ] is that just as predicated by corollary [ corollary : feedback ] and theorem [ theorem : throughput_capacity ] , although adopting the feedback mechanism usually leads to a higher throughput , it does not improve the throughput capacity performance . regarding the delay performance, we can see from fig .[ fig : delay_ls ] and fig .[ fig : delay_ec ] that in a manet with either ls - mac or ec - mac , the behavior of expected e2e delay under the scenario without feedback is quite different from that under the scenario with feedback . as increases , in the scenario without feedback first slightly increases and then decreases monotonically , while in the scenario with feedback first slightly increases , then decreases somewhat and finally increases monotonically .the results in fig .[ fig : validation ] indicate that although adopting the feedback mechanism leads to an improvement in per flow throughput , such improvement usually comes with a cost of a larger e2e delay .this is because that the feedback mechanism can avoid the packet dropping at a relay node , which contributes to the throughput improvement but at the same time makes the source / relay buffers tend to be more congested , leading to an increase in delay .based on the proposed theoretical framework , this section presents extensive numerical results to illustrate the potential impacts of buffer constraint on network performance .notice from section [ subsection : validation ] that the performance behaviors of the ls - mac are quite similar to that of the ec - mac , in the following discussions we only focus on a manet with the ls - mac .we first summarize in fig .[ fig : t_d_vs_bs_br ] how and vary with and under the setting of ( , , ) . about the throughput performance, we can see from fig.[fig : throughput_vs_bs ] and fig.[fig : throughput_vs_br ] that just as predicated by corollary [ corollary : buffer_infinite ] and corollary [ corollary : feedback ] , increases as either or increases , and the feedback mechanism can lead to an improvement in .it is interesting to see that as increases , under the two scenarios without and with feedback converges to two distinct constants determined by ( 21a ) .as increases , however , under the two scenarios finally converges to the same constant determined by ( 21b ) . 
regarding the delay performance , fig .[ fig : delay_vs_bs ] shows that as increases , under the scenario without feedback quickly converges to a constant determined by ( 22b ) , while under the scenario with feedback monotonically increases to infinity , which agrees with the result of ( 22a ) .we can see from fig .[ fig : delay_vs_br ] that with the increase of , however , under the scenario without feedback monotonically increases , while under the scenario with feedback first decreases and then increases . similar to the throughput behavior in fig .[ fig : throughput_vs_br ] , fig .[ fig : delay_vs_br ] shows that as increases under the two scenarios also converges to the same constant determined by ( 22c ) . the results in fig .[ fig : t_d_vs_bs_br ] indicate that and have different impacts on the network performance in terms of and . in particular , as increases , a notable performance gap between the scenarios without and with feedback always exist , where the throughput gap converges to a constant but the corresponding delay gap tends to infinity . as increases , however , the performance gap between the two scenarios tends to decrease to , which implies that the benefits of adopting the feedback mechanism are diminishing in manets with a large relay buffer size . a further careful observation of fig .[ fig : t_d_vs_bs_br ] indicates that although we can improve the throughput by increasing or , it is more efficient to adopt a large rather than a large for such improvement .for example , under the scenario without feedback , fig .[ fig : throughput_vs_bs ] shows that by increasing from to , can be improved from to ( with an improvement of ) ; while fig . [fig : throughput_vs_br ] shows that by increasing from to , can be improved from to ( with an improvement of ) .to further illustrate how the impacts of buffer size on network performance are dependent on packet generating rate , we focus on a manet with feedback and summarize in fig . [fig : t_d_vs_bs_br_lambda ] how its throughput and delay vary with and ( ) .we can see from fig .[ fig : t_bs_lambda ] and fig .[ fig : t_br_lambda ] that although in general we can improve by increasing either or , the degree of such improvement is highly dependent on . as increases ,the throughput improvement from monotonically increases , while the corresponding improvement from first increases and then decreases .[ fig : t_bs_lambda ] and fig .[ fig : t_br_lambda ] also show that as increases , under different settings of finally converges to the same constant ( i.e. , given by ( [ eq : throughput_capacity ] ) ) , while under a given setting of converges to a distinct constant of , which monotonically increases as increases . 
regarding the joint impacts of and on delay performance, we can see clearly from fig .[ fig : d_bs_lambda ] that just as discussed in corollary [ corollary : buffer_infinite ] , there exists a threshold of beyond which will increases to infinity as increases , while for a given less than the threshold , almost keeps as a constant as increases .about the joint impacts of and on delay performance , fig .[ fig : d_br_lambda ] shows that for a given setting of , there also exists a threshold for , beyond which almost keeps as a constant as increases .it is interesting to see that such threshold for and the corresponding delay constant tend to increase as increases .the results in fig .[ fig : d_br_lambda ] imply that a bounded can be always guaranteed in a manet as long as its source buffer size is limited .we summarize in fig .[ fig : tc_vs_br ] how throughput capacity varies with relay buffer size , where two network settings of ( ) and ( ) are considered .[ fig : tc_vs_br ] shows that as increases , first increases quickly and then gradually converges to a constant being determined by ( [ eq : throughput_capacity ] ) .this observation indicates that although the throughput capacity can be improved by adopting a larger relay buffer , in practical network design the relay buffer size should be set appropriately according to the requirement on network capacity such that a graceful tradeoff between network performance and networking cost can be achieved .it can be observed from fig .[ fig : tc_vs_br ] that is also dependent on the number of nodes , which motivates us to further explore the scaling law of throughput capacity in such a buffer limited manet .based on ( [ eq : throughput_capacity ] ) , ( [ eq : p_sd_ls ] ) and ( [ eq : p_sr_ls ] ) , the asymptotic throughput capacity is given by where . from ( [ eq : scaling ] ) we can see that as tends to either or infinity , tends to , while if is fixed scales as as both and scale up . it is notable that in an upper bound of throughput ( with the notation ) was proposed for a manet with limited relay buffer , however , the scaling law developed here is an achievable one ( with the notation ) , which indicates that to achieve a non - vanishing throughput capacity in a manet with the general limited buffer constraint , the relay buffer size should grow at least linearly with the number of nodes . based on ( [ eq : throughput_capacity ] ) , we plot in fig .[ fig : tc_vs_n ] that how scales with under three typical buffer settings , i.e. , is fixed as a constant ( here ) , and .we can see from fig . 
[fig : tc_vs_n ] that in general decreases as increases , and vanishes to when is fixed , while it converges to a non - zero constant when or .this paper explored , for the first time , the performance modeling for manets under the general limited buffer constraint .in particular , a complete and generally applicable theoretical framework was developed to capture the inherent buffer occupancy behaviors in such a manet , which enables the exact expressions to be derived for some fundamental network performance metrics , like the achievable throughput , expected e2e delay and throughput capacity .some interesting conclusions that can be drawn from this study are : 1 ) in general , adopting the feedback mechanism can lead to an improvement in the throughput performance , but such improvement comes with the cost of a relatively large delay ; 2 ) for the purpose of throughput improvement , it is more efficient to adopt a large relay buffer rather than a large source buffer ; 3 ) the throughput capacity is dominated by the relay buffer size ( rather than source buffer size ) and the number of nodes ; 4 ) to ensure that a buffer - limited manet is scalable in terms of throughput capacity , its relay buffer size should grow at least linearly with the number of network nodes .based on the transition scenarios , we can see is actually equal to the packet arrival rate of the relay buffer , so we just need to determine for the evaluation of .when serves as a relay , all other nodes ( except and its destination ) may forward packets to it .when one of these nodes sends out a packet from its source buffer , it will forward the packet to with probability .this is because with probability the packet is intended for a relay node , and each of the relay nodes are equally likely .thus , where denotes the packet departure rate of a source buffer .due to the reversibility of the b / b/1/ queue , the packet departure process of the source buffer is also a bernoulli process with its departure rate being determined as then we have regarding the evaluation of transition probability , it is notable that just corresponds to the service rate of the relay buffer when it is at state . to determine , we further decompose the state ( ) into sub - states as illustrated in fig .[ fig : state_breakdown ] , where denotes the number of non - empty relay queues in the relay buffer .let denote the service rate of the relay buffer when it is at sub - state , and let denote the probability that the relay buffer is at sub - state conditioned on that the relay buffer is at state , we then have we first derive the term in ( [ eq : mu_r_i ] ) .notice that with probability the node conducts a relay - to - destination transmission , and it will equally likely choose one of the nodes ( expect and its destination ) as its receiver .thus , when there are non - empty relay queues in the relay buffer , the corresponding service rate is determined as to determine the conditional probability , we adopt the following occupancy approach proposed in . 
first , for the relay buffer with packets , where each packet may be destined for any one of the nodes ( except and ) , the number of all possible cases is then , for the relay buffer with packets , where these packets are destined for only different nodes , the number of possible cases is finally , since the locations of nodes are independently and uniformly distributed , each case occurs with equal probability .according to the _ classical probability _ , we have substituting ( [ eq : mu_r_il ] ) and ( [ eq : p_l|i ] ) into ( [ eq : mu_r_i ] ) , is determined as the network scenario with feedback , node can not execute a source - to - relay transmission when the relay buffer of its intended receiver is full ( with overflow probability ) , thus the service rate of source buffer of node is given by based on the similar analysis as that in section [ subsection : osd_nofeedback ] , the osd of source buffer here can also be determined by expression ( [ eq : osd_source ] ) , and the one - step transition probabilities of the birth - death chain of relay buffer can be determined as where denotes the packet arrival rate of the relay buffer when the relay buffer is not full . regarding the evaluation of , we have where denotes the packet departure rate of a source buffer , and ( [ eq : lambda_r+_fb ] ) follows from ( [ eq : lambda_s- ] ) .notice that the transition probabilities here are the same as that under the scenario without feedback , thus the osd of the relay buffer here can also be determined by expression ( [ eq : osd_relay ] ) .since is the fixed - point of equation ( [ eq : self - mapping ] ) , we apply the fixed - point iteration to solve .the detailed algorithm of the fixed - point iteration is summarized in algorithm [ algorithm : fixed_point_iteration ] .+ basic network parameters ; + relay buffer overflow probability ; set and ; ; ; ; ; ; ; ;let and denote the packet delivery rates at the destination of node through the one - hop transmission and the two - hop transmission respectively , then we have where denotes the packet departure rate of source buffer of . substituting ( [ eq : lambda_s- ] ) into ( [ eq : t1 ] ) and ( [ eq : t2 ] ), then ( [ eq : throughput ] ) follows from . regarding the expected e2e delay , we focus on a tagged packet of node and evaluate its expected source queuing delay and expected delivery delay , respectively . for the evaluation of have let ( ) denote the probability that there are packets in the source buffer conditioned on that the source buffer is not full , then is determined as where is the normalization constant . since , we have then is given by after moving to the hol in its source buffer , packet will be sent out by node with mean service time , and it may be delivered to its destination directly or forwarded to a relay .let denote the expected time that takes to reach its destination after it is forwarded to a relay , then we have based on the osd , is given by ( [ eq : relay_length ] ) .due to the symmetry of relay queues in a relay buffer , the mean number of packets in one relay queue is , and the service rate of each relay queue is .thus , can be determined as substituting ( [ eq : d_r ] ) into ( [ eq : delivery_delay ] ) , then ( [ eq : e2e_delay ] ) follows from .from expressions ( [ eq : mu_s_nf ] ) and ( [ eq : mu_s_fb ] ) , we can see that the for a given packet generating rate , the service rate of the source buffer under the scenario with feedback is smaller than that under the scenario without feedback . 
from ( [ eq : osd_source ] ) we have which indicates that under the scenario with feedback is smaller than that under the scenario without feedback .we let and substitute into ( [ eq : throughput ] ) , then can be expressed as where and . regarding the derivative of have where here ( [ eq : larger_0 ] ) is because that for .we can see from ( [ eq : pi_s_0_mu_s ] ) that increases as increases , and from ( [ eq : t_r])([eq : larger_0 ] ) that increases as decreases .thus , we can conclude that under the scenario with feedback is larger than that under the scenario without feedback , which indicates that adopting the feedback mechanism improves the throughput performance .for the scenario without feedback , we know from ( [ eq : osd_source ] ) that thus , as increases , decreases which leads to an increase in ( refer to the analysis in appendix [ appendix : feedback ] ) . for the scenario with feedback , as increases , the manet tends to be more congested with a larger .thus , we know from ( [ eq : mu_s_fb ] ) that the corresponding decreases , and then from ( [ eq : pi_s_0_mu_s ] ) that decreases , leading to an increase in .from an intuitive point of view , a larger buffer implies that more packets can be stored and packet loss can be reduced , thus a higher throughput can be achieved .more formally , from ( [ eq : osd_source ] ) we have where ( [ eq : derivative_b_s ] ) follows since when and when .then we can conclude that as increases , decreases , leading to an increase in .let and substitute into ( [ eq : osd_relay ] ) , then we have where then we can conclude that as increases , decreases , leading to an increase in ( refer to expression ( [ eq : throughput ] ) ) . regarding the infinite source buffer ( i.e. , ) , when , and we have according to the queuing theory , for a bernoulli / bernoulli queue ( i.e., the buffer size is infinite ) , its queue length tends to infinity when the corresponding arrival rate is equal to or larger than the service rate .thus , we have , which leads that and .when , , and we have based on the analysis in appendix [ appendix : throughput_delay ] , is determined as substituting ( [ eq : ls_bs_infinite ] ) into ( [ eq : e2e_delay ] ) we obtain ( [ eq : d_bs_infinite ] ) .regarding the infinite relay buffer ( i.e. , ) , from ( [ eq : osd_relay ] ) and ( [ eq : relay_length ] ) we have where ( [ eq : taylor_expansion ] ) and ( [ eq : taylor_expansion1 ] ) follow since is just the taylor - series expansion of , and ( [ eq : pi_r_infinite ] ) follows from the lhpital s rule . substituting ( [ eq : pi_r_infinite ] ) into ( [ eq : throughput ] )we obtain ( [ eq : t_br_infinite ] ) , and substituting ( [ eq : pi_r_infinite ] ) and ( [ eq : lr_br_infinite ] ) into ( [ eq : e2e_delay ] ) we obtain ( [ eq : d_br_infinite ] ) . regarding the manet without buffer constraint ( i.e. , and ) , we can directly obtain ( [ eq : t_bs_br_infinite ] ) and ( [ eq : d_bs_br_infinite ] ) by combining the corresponding results of the infinite source buffer scenario and the infinite relay buffer scenario .for a cell - partitioned manet with ls - mac , the event that node gets an opportunity of source - to - destination ( resp .source - to - relay or relay - to - destination ) transmission in a time slot can be divided into the following sub - events : ( 1 ) its destination is ( resp . 
is not ) in the same cell with ; ( 2 ) other out of nodes are in the same cell with , while the remaining nodes are not in this cell ; ( 3 ) contends for the wireless channel access successfully .thus we have and m. n. tehrani , m. uysal , and h. yanikomeroglu , `` device - to - device communication in 5 g cellular networks : challenges , solutions , and future directions , '' _ ieee commun . mag ._ , vol .52 , no . 5 ,pp . 8692 , 2014 .j. andrews , s. shakkottai , r. heath , n. jindal , m. haenggi , r. berry , d. guo , m. neely , s. weber , s. jafar , and a. yener , `` rethinking information theory for mobile ad hoc networks , '' _ ieee commun . mag ._ , vol .46 , no . 12 , pp . 94101 , 2008 .a. goldsmith , m. effros , r. koetter , m. medard , and l. zheng , `` beyond shannon : the quest for fundamental performance limits of wireless ad hoc networks , '' _ ieee commun ._ , vol .49 , no . 5 , pp . 195205 , 2011 .j. liu , m. sheng , y. xu , j. li , and x. jiang , `` end - to - end delay modeling in buffer - limited manets : a general theoretical framework , '' _ ieee trans .wireless commun ._ , vol . 15 , no . 1 ,pp . 498511 , 2016 .j. liu and y. xu , `` c++ simulator : performance modeling for manets under general limited buffer constraint , '' [ online ] .available : https://www.researchgate.net/profile/jia_liu100 , 2015 .doi : 10.13140/rg.2.1.1266.8248 .
understanding the real achievable performance of mobile ad hoc networks ( manets ) under practical network constraints is of great importance for their applications in future highly heterogeneous wireless network environments . this paper explores , for the first time , the performance modeling for manets under a general limited buffer constraint , where each network node maintains a limited source buffer of size to store its locally generated packets and also a limited shared relay buffer of size to store relay packets for other nodes . based on the queuing theory and birth - death chain theory , we first develop a general theoretical framework to fully depict the source / relay buffer occupancy process in such a manet , which applies to any distributed mac protocol and any mobility model that leads to the uniform distribution of nodes locations in steady state . with the help of this framework , we then derive the exact expressions of several key network performance metrics , including achievable throughput , throughput capacity , and expected end - to - end delay . we further conduct case studies under two network scenarios and provide the corresponding theoretical / simulation results to demonstrate the application as well as the efficiency of our theoretical framework . finally , we present extensive numerical results to illustrate the impacts of buffer constraint on the performance of a buffer - limited manet . mobile ad hoc networks , buffer constraint , throughput , delay , performance modeling .
generally, one of the purposes of the inverse scattering problem is to identify the locations of small electromagnetic inhomogeneities from a measured scattered field or far-field pattern. this is known to be a difficult problem owing to its nonlinearity and ill-posedness, but it remains an interesting and challenging one because it arises in mathematics, physics, medical imaging, engineering sciences, etc., all of which are highly relevant to modern life. related works can be found in and the references therein. motivated by this, various algorithms for solving the inverse scattering problem have been developed. most of them are based on the least-squares method, so to guarantee a successful performance, _a priori_ information about the unknown inhomogeneities, appropriate regularization terms that depend strongly on the specific problem, and the calculation of a complex fréchet (or domain) derivative must be considered beforehand. if any one of these conditions is not fulfilled, serious problems such as non-convergence, the local-minimizer problem, and a considerable increase in the computational cost due to the large number of iterations will arise. as an alternative, fast identification algorithms have been developed. among them, single- and multi-frequency kirchhoff and subspace migration have shown their feasibility in the detection of small inhomogeneities for full- and limited-view inverse scattering problems; refer to . however, the exact value of the applied frequency must be known in order to detect the locations of inhomogeneities accurately. if not, it is only possible to recognize the existence of inhomogeneities, i.e., identification of their exact locations is impossible. this fact has been examined through various simulation results (see ), and recently the related mathematical theory of the multiple signal classification (music) algorithm for detecting small electromagnetic inhomogeneities has been considered (see ); however, a reliable mathematical theory has not yet been developed satisfactorily. in this paper, we carefully analyze the subspace migration imaging function with an inaccurate frequency by establishing a relationship with the bessel functions of order zero and one of the first kind. this is based on the asymptotic expansion formula in the presence of a set of electromagnetic inhomogeneities with small diameter and on the structure of the singular vectors associated with the nonzero singular values of the so-called multi-static response (msr) matrix collected from the far-field pattern. the identified relationship explains why subspace migration yields inaccurate locations of small inhomogeneities when the applied frequency is inaccurate. the remaining parts of this paper are organized as follows. in section [ sec:2 ], we introduce the direct scattering problem, the far-field pattern, and the asymptotic expansion formula. in section [ sec:3 ], the subspace migration algorithm for the detection of small inhomogeneities is surveyed. in section [ sec:4 ], we establish a relationship between the subspace migration imaging function and bessel functions, and investigate the cause of the inaccurate results. in section [ sec:5 ], the results of numerical simulations are exhibited in support of our analysis. a short conclusion follows in section [ sec:6 ]. let be a homogeneous inclusion with a small diameter in the two-dimensional space .
throughout this paper , we assume totally different small inhomogeneities exist in such that where denote the location of and is a simply connected smooth domain containing the origin . for the sake , we assume that is a unit circle and are separated from each other .let be a given positive angular frequency with denotes given wavelength. throughout this paper is sufficiently large enough and satisfying following condition for all with . throughout this paper ,we denote and be the dielectric permittivity and magnetic permeability of , respectively .similarly , we let and be those of . for simplicity ,let be the collection of , and we define the following piecewise constants : in this paper , we assume that , , and for the sake of simplicity . at a given positive angular frequency ,let be the time - harmonic total field that satisfies the helmholtz equation with transmission conditions on for all .let be the solution of ( [ helmholtzequation ] ) without . in this paper , we consider the following plane - wave illumination : for a vector , .here , denotes a two - dimensional unit circle and throughout this paper , we assume that the set spans .generally , the total field can be divided into the incident field and the unknown scattered field , which satisfies the sommerfeld radiation condition uniformly in all directions .notice that we assumed so , we set from now on .as given in , can be written as the following asymptotic expansion formula in terms of where is uniform in and , is a symmetric matrix defined as \mbox{area}(\mathbf{b}_m),\end{aligned}\ ] ] and is the two - dimensional time harmonic green function ( or fundamental solution to helmholtz equation ) here , is the hankel function of order zero and of the first kind .the far - field pattern is defined as function that satisfies as uniformly on .subspace migration algorithm for identifying locations of small defects introduced in used the structure of a singular vector of the multi - static response ( msr ) matrix {j , l=1}^{n}=[u_\infty({\boldsymbol{\vartheta}}_j,{\boldsymbol{\theta}}_l)]_{j , l=1}^{n} ] , ^t ] , ^t ] .the incident directions are selected as ^t\quad\mbox{for}\quad l=1,2,\cdots , n.\ ] ] in every examples , a white gaussian noise with db signal - to - noise ratio ( snr ) is added via the matlab command ` awgn ` included in the signal processing package .[ result1 ] shows the maps of when msr matrix is generated with , , and , . 
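before turning to the numerical results, the following python sketch illustrates the subspace-migration workflow just surveyed: an msr matrix is assembled from the leading-order point-scatterer far-field model, its svd is taken, and a plane-wave test vector is scanned over the search region. the inclusion locations `locs`, the lumped contrast coefficients `amps`, the number of directions, and the sign convention of the msr model are assumptions made for illustration, and for brevity the map projects only onto the dominant left singular vectors; the imaging functional used in this paper also weights by the right singular vectors, which does not change where the peaks occur.

```python
import numpy as np

# Hedged sketch: point-scatterer MSR matrix -> SVD -> subspace imaging map.
N = 64
theta = 2 * np.pi * np.arange(N) / N
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)     # incident/observation directions

locs = np.array([[0.6, -0.2], [-0.4, 0.5]])                  # assumed inclusion locations
amps = np.array([1.0, 0.8])                                  # assumed lumped contrast coefficients
k = 2 * np.pi                                                # true wavenumber
k_applied = 2 * np.pi                                        # wavenumber used for imaging

E = np.exp(1j * k * (dirs @ locs.T))                         # E[j, m] = exp(i k theta_j . r_m)
K_msr = (E * amps) @ E.T                                     # leading-order MSR model (one sign convention)

U, s, Vh = np.linalg.svd(K_msr)
M = int(np.sum(s > 0.01 * s[0]))                             # number of significant singular values

def imaging_map(z):
    w = np.exp(1j * k_applied * (dirs @ z)) / np.sqrt(N)     # plane-wave test vector at point z
    return sum(abs(np.vdot(U[:, n], w)) ** 2 for n in range(M))

grid = np.linspace(-1.0, 1.0, 81)
img = np.array([[imaging_map(np.array([x, y])) for x in grid] for y in grid])

iy, ix = np.unravel_index(np.argmax(img), img.shape)
print("strongest peak near:", grid[ix], grid[iy])            # close to one of the assumed locations
# With k_applied == k the map peaks at `locs`; rerunning with k_applied != k
# shifts the peaks, which is the effect analysed in the next section.
```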
as expected, although unexpected artifacts disturb the identification, the locations of are clearly identified for any value of . furthermore, on the basis of theorem [ theoremfrequency ], since the locations of are identified via , the extracted locations of are scattered when and concentrated when . notice that since the true value of is , very accurate locations of are identified via the map of . [ fig. result1 — maps of for (top, left), (middle, left), and (bottom, right) when ; white-colored circles in the right-column maps mark the true locations of . ] fig. [ result2 ] shows the maps of when the msr matrix is generated with , , and different material properties , , and . similar to the results in fig. [ result1 ], we can recognize the existence of , but a huge amount of artifacts impedes the identification. note that since the true value of is , the locations of can be identified accurately via the map of . however, since we have no _a priori_ information about the true value of , the locations of cannot be identified exactly at this stage. [ fig. result2 — maps of for (top, left), (middle, left), and (bottom, left) when ; white-colored circles in the right-column maps mark the true locations of . ]
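the scattering/concentration of the estimated locations reported above can be checked directly from the dominant bessel contribution: with uniformly distributed directions, the coherent sum (1/n) Σ_j exp(i θ_j · (k r − k̃ z)) is well approximated by j0(|k r − k̃ z|), so the imaging map peaks near z = (k/k̃) r rather than at r when the applied wavenumber k̃ differs from the true k. the short python sketch below (scipy assumed available) verifies this for one assumed location r and an assumed 20% frequency mismatch.

```python
import numpy as np
from scipy.special import j0

# Numerical check of the J0 relationship discussed above.
N = 128
th = 2 * np.pi * np.arange(N) / N
dirs = np.stack([np.cos(th), np.sin(th)], axis=1)

k_true, k_appl = 2 * np.pi, 1.6 * np.pi            # applied wavenumber 20% below the true one
r = np.array([0.5, 0.3])                           # assumed true location

def coherent_sum(z):
    """|(1/N) sum_j exp(i theta_j . (k_true*r - k_appl*z))|."""
    return abs(np.exp(1j * (dirs @ (k_true * r - k_appl * z))).mean())

for z in (r, (k_true / k_appl) * r):
    print(z, coherent_sum(z), j0(np.linalg.norm(k_true * r - k_appl * z)))
# The sum (and J0) reaches its maximum of 1 at the scaled point (k/k~) r,
# not at the true location r -- the source of the inaccuracy analysed here.
```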
based on the asymptotic expansion formula of the far-field pattern in the presence of small electromagnetic inhomogeneities, the structure of the subspace migration imaging function has been investigated when the applied frequency is inexact. based on the relationship with the bessel functions of order zero and one of the first kind, we have confirmed the reason why the locations of the inhomogeneities are identified inaccurately. in this paper, we considered the detection of inhomogeneities in the full-view inverse scattering problem. in view of the difficulties discussed in , extension to the limited-view or half-space problem will be an interesting research subject. furthermore, we expect that the analysis can be extended to various inverse scattering problems in three dimensions. this research was supported by the basic science research program through the national research foundation of korea (nrf), funded by the ministry of education (no. nrf-2014r1a1a2055225), and by the research program of kookmin university in korea. s. r. arridge, inverse problems *15*, r41 (1999). h. ammari, g. bao and j. flemming, siam j. appl. math. *62*, 1369 (2002). a. s. fokas, y. kurylev and v. marinakis, inverse problems *20*, 1067 (2004). y. t. kim, i. doh, b. ahn and k. y. kim, j. korean soc. nondestruc. test. *35*, 128 (2015). s.-h. son, h.-j. kim, k.-j. lee, j.-y. jeon and h.-d. choi, j. electromagn. sci. *15*, 250 (2015). h. ammari and h. kang, _reconstruction of small inhomogeneities from boundary measurements_ (springer-verlag, berlin, 2004). g. bao and p. li, inverse problems *20*, l1 (2004). p. m. van den berg and r. e. kleinman, inverse problems *11*, l5 (1995). m. burger, inverse problems *17*, 1327 (2001). r. r. coifman, m. goldberg, t. hrycak, m. israel and v. rokhlin, waves random media *9*, 441 (1999). o. dorn and d. lesselier, inverse problems *22*, r67 (2006). s. gutman and m. v. klibanov, inverse problems *10*, 573 (1994). v. isakov and s. f. wu, inverse problems *18*, 1147 (2002). h. ammari, j. garnier, h. kang, w.-k. park and k. slna, siam j. appl. math. *71*, 68 (2011). w.-k. park, j. comput. phys. *283*, 52 (2015). park and d. lesselier, waves random complex media *22*, 3 (2012). lance, j. m. vissers and n. bom, ultrasonics *26*, 37 (1988). r. solimene, g. ruvio, a. dellaversano, a. cuccaro, m. j. ammann and r. pierri, prog. b *50*, 347 (2013). w.-k. park, inverse problems *26*, 074008 (2010). h. ammari, e. iakovleva and d. lesselier, multiscale model. simul. *3*, 597 (2005).
generally, in the application of subspace migration for detecting the locations of small inhomogeneities, one begins the reconstruction procedure with _a priori_ information about the applied frequency. however, the mathematical theory of subspace migration has not been developed satisfactorily when the applied frequency is unknown. in this paper, we identify the mathematical structure of the subspace migration imaging function for finding the locations of small inhomogeneities in two-dimensional homogeneous space by establishing a relationship with the bessel functions of integer order zero and one of the first kind. this expression indicates the reason behind the appearance of inaccurate locations. numerical simulations are performed to support our analysis.
the public goods game ( pgg ) provides a classical example that describes the evolutionary dynamics of competing species or strategies in biological and social systems .usually this game is played by _cooperators _ , which create public goods at a cost to themselves , and _defectors _ , which enjoy the benefits but do not pay any cost . then cooperation extinguishes and public goods creation vanishes in the so called _ tragedy of the commons _ .however , the inclusion of a third non - participating strategy allows for a sequential dominance of cooperation , defection and abstention from the game .this latter behavior resembles the rock - paper - scissors game which has been found experimentally in the three competing strains of e. coli as well as in social groups with cooperators , defectors and volunteers .it has been shown that mutations among strategies could give rise to more complex dynamical behavior , like the emergence of self - sustained oscillations via a supercritical hopf bifurcation .moreover , spontaneous formation of complex patterns has been studied in spatially extended ecological systems .non - trivial spatiotemporal patterns of synchronized action and their evolutionary role were also reported . nevertheless , other aspects of complexity and the emergence of self - organization by means of synchronization and chimera states have not been investigated intensively in the context of evolutionary game theory .our study contributes to the acquisition of new findings towards this direction .chimera states are characterized by the coexistence of coherent and incoherent behavior in systems of coupled oscillators .they were initially reported for identical phase oscillators , where the nonlocal coupling was thought to be the source of this counter - intuitive phenomenon .however , they have been recently found in systems with global and purely local coupling .although , most works on chimera states consider simple network topologies ( see and references within ) , recently , they have been found in real networks , like the _ c.elegans _ neural connectome and the cat cerebral cortex .it has been suggested that chimera states may be related to bump states in neural systems , the phenomenon of unihemispheric sleep , or epileptic seizures . for finite systems chimera statesare known to be chaotic transients , which can be stabilized by various recently developed control schemes .the existence of chimera states has also been verified experimentally over the last years in various settings . herewe study the emergence of collective phenomena , and specifically chimera states , in a pgg with mutations which is organized on a ring network with nonlocal connections . in each node of the network - organized pgg the species can select among different strategies as determined by the replicator equation .they are allowed also to mutate into one another with a uniform mutation rate .moreover , the network connectivity structure defines a mutual influence among strategies across the network nodes .the latter process , under appropriate conditions , resembles the diffusion of species across the network .we show that the considered system exhibits synchronization and chimera states , and promotes , respectively , bursting oscillations of cooperation either globally or in regions separated by incoherent clusters .we assume a large well - mixed population of cooperators , defectors and destructive agents whose interactions are governed by a pgg . 
and , ( b ) a limit cycle ; and , ( c ) a limit cycle ; and , ( d ) a limit cycle approaching a heteroclinic orbit ; and . trajectories are projected into a simplex whose corners correspond to the dominance of cooperators ( c ) , defectors ( d ) or destructive agents ( j ) .other parameters are and . ] at each round of the game a group of individuals is randomly sampled : cooperators from this group pay a cost and create a benefit ( with ) which is distributed equally among all participants of the group .defectors receive their share from the benefits without paying any cost .destructive agents , without receiving any benefits , induce a damage into the game which is shared equally by cooperators and defectors .the fitnesses of the individuals in a pgg determine their evolutionary fate , and are calculated as the average payoff of each strategy after its participation in many interaction groups , which for large populations ( c.f . ) results in , [ eq : payoffs ] + \frac{r}{n } \frac{1-z^n}{1-z } - 1 \nonumber\\ & & -\ : d \left ( \frac{1-z^n}{1-z } - 1\right)\,,\label{eq : payoffa}\\ p_y & = & p_x + 1 - \frac{r}{n } \frac{1-z^n}{1-z}\,,\label{eq : payoffb}\\ p_z & = & 0 \,,\label{eq : payoffc}\end{aligned}\ ] ] for the mutation rate .a stable focus loses its stability via a supercritical hopf bifurcation ( red dot ) and becomes unstable giving rise to a limit cycle .( b ) continuation of the hopf point determines the curve which separates different dynamical regimes in the parameter space .other parameters are and . ] where , and are the fractions of cooperators , defectors and destructive agents ( or the relative frequencies of individuals playing each strategy ) , respectively ; is the group size and is the total damage that destructive agents inflict to the participants of the game . without loss of generality, we set the cost paid by the cooperators to unity , .as a consequence , the multiplicative factor now represents the benefit produced per cooperator in the group. the evolution of the three strategies can be studied by the replicator - mutation dynamics given by , [ eq : localdyn ] where is the average payoff of the population at a given time .obviously ; this allows to reduce the dimensionality of the phase space and analyze the dynamics of three strategies only by investigating and . in each equation , in addition to the replication term which accounts for the variation of the fractions of individuals due to the selection process ( first term on the right hand side ) , mutations are also included ( second term ) and represent random changes between the strategies at a rate .this system has one non - trivial and three trivial fixed points ( see figure [ fig : phasespace ] ) .the trivial fixed points are saddles and represent the dominance of cooperators ( ) , defectors ( ) or destructive agents ( ) . the non - trivial point ( gray dot )can behave as a stable focus ( see e.g. figure [ fig : phasespace](a ) ) that attracts all the trajectories or as an unstable focus ( see e.g. figure [ fig : phasespace](b)(d ) ) that repels the trajectories , which however , are confined within the heteroclinic cycle , hence they are attracted to a stable limit cycle . 
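a compact numerical illustration of the replicator-mutator dynamics of eq. ( [ eq : localdyn ] ) is sketched below in python. only the payoff relations that are legible above are used (p_z = 0 and p_y = p_x + 1 − (r/n)(1 − z^n)/(1 − z)); the first bracketed term of eq. ( [ eq : payoffa ] ) is not reproduced, and the uniform-mutation term is written in one common form, so the snippet should be read as a structural sketch rather than the exact model behind the figures. the parameter values r, n, d and the mutation rate are illustrative.

```python
import numpy as np

# Structural sketch of the replicator-mutator dynamics for the strategy
# fractions x (cooperators), y (defectors), z (destructive agents).
r, n_group, d, mu = 3.0, 5, 0.4, 1e-3   # illustrative parameter values

def payoffs(x, y, z):
    B = (1.0 - z ** n_group) / (1.0 - z) if z < 1.0 else float(n_group)
    P_x = (r / n_group) * B - 1.0 - d * (B - 1.0)   # partial reconstruction of eq. (payoffa)
    P_y = P_x + 1.0 - (r / n_group) * B             # eq. (payoffb)
    return np.array([P_x, P_y, 0.0])                # eq. (payoffc): P_z = 0

def step(s, dt=0.01):
    P = payoffs(*s)
    P_bar = s @ P
    ds = s * (P - P_bar) + mu * (s.sum() - 3.0 * s)  # replication + uniform mutation
    s = np.clip(s + dt * ds, 0.0, None)
    return s / s.sum()                               # keep the state on the simplex

s = np.array([0.4, 0.3, 0.3])
for _ in range(100_000):
    s = step(s)
print(s)
# With the paper's full payoffs this trajectory converges to the interior focus
# or to the limit cycle shown in the figure, depending on r, n, d and mu.
```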
linear stability analysis has shown that a supercritical hopf bifurcation occurs for increasing or decreasing , beyond which self - sustained oscillations spontaneously emerge .figure [ fig : hopf](a ) shows the hopf bifurcation point ( red dot ) for a fixed mutation rate , while the continuation of the hopf point determines the curve which separates different dynamical regimes in the parameter space ( see figure [ fig : hopf](b ) ) . the amplitude andthe period of the limit cycles becomes larger as the parameters and lie further from the hopf point .here we consider a metapopulation of individuals which are organized on ring networks with nonlocal connections .each node of such networks is occupied by a large well - mixed population of individuals which interact internally according to a pgg as described above .in addition to the local interactions that is , replications and mutations the populations in each node take into account the strategies followed by the populations in their connected nodes . in the ring networks considered here ,the population size in the nodes is assumed to be constant .therefore , the overall process can be described by the following equations : [ eq : ringdyn ] where the summation terms account for the mutual influence of strategies between populations in connected nodes and characterizes the strength of this influence .taking into account eq .the latter process is equivalent to the diffusion of cooperators and defectors across the network ( c.f . ) . in general , an increasing coupling strength in the system results in synchronization of the metapopulation , where the fractions of cooperators , defectors and destructive agents in each node oscillate with the same phase and amplitude .however , the nonlocal topology of the ring network can induce non - trivial collective phenomena like chimera states . in the following ,we focus on the analysis of these states . as a measure indicating the existence of a chimera state we employ the _ mean phase velocity _ of each oscillator : where is the number of periods of the -th oscillator during a time interval .the typical profile of in the case of a chimera state is flat in the synchronous domains and arc - shaped in the incoherent ones .in addition to the mean phase velocity , we calculate the classification measures for chimera states developed recently by kemeth __ in . in particular , we employ the local curvature of the phases of the oscillators as a measure for the spatial coherence .the phase of each oscillator is defined as , where denote time averages . in the ring networks considered here ,we calculate the local curvature at each node by applying the discrete laplacian operator on each snapshot at time .this operator reads : where denotes the spatial distribution of the phases in one spatial dimension with periodic boundary conditions at time . 
for the nodes in the synchronous / coherent clusters it holds that , while for the nodes in the incoherent clusters , is finite and has pronounced fluctuations .the maximum value of corresponds to the local curvature of nodes whose two nearest neighbors have the maximum phase difference .the local curvature defined above allows for a clear representation and characterization of the obtained chimera states .figure [ fig : multiplot ] shows a typical chimera state emerging from the dynamics of our model : in ( a ) we see the space - time plot of the phase , while ( b ) and ( c ) show the corresponding mean phase velocity profile and a snapshot at a given time instance .figure [ fig : multiplot](d ) shows the space - time evolution of the spatial coherence index ( eq . [ eq : loc_curv ] ) and figure [ fig : multiplot](e ) illustrates a single time snapshot of the chimera state in the phase space .the gray dots correspond to the incoherent cluster , the red and orange segments refer to the coherent domains , and the solid line marks the orbit of the uncoupled unit. in the example of figure [ fig : multiplot ] , the observed chimera state has two ( in)coherent regions .the multiplicity of a chimera state ( number of synchronous clusters ) may be manipulated by varying the coupling range of each node .this results in the formation of multi - clustered ( or multi - headed ) chimeras reported in many systems . the effect of the coupling range is illustrated in figure [ fig : spacetime ] , where the space - time plots for phase and the corresponding mean phase velocity profiles are shown for three different values of .note that the coherent regions are always in antiphase , which explains also the even number of ( in)coherent clusters in the obtained chimeras .based on the local curvature we can measure the relative size of the spatially coherent ( i.e. synchronized ) clusters at each time step . for this purposewe consider the normalized probability function of , ; it equals in a non - synchronous system and in a fully synchronized one .any value of between and indicates coexistence of coherence and incoherence , i.e. a chimera state .the definition of spatial coherence or incoherence is not absolute , but depends on the maximum curvature of the system .therefore this index is defined with the threshold as : apart from the spatial coherence , we also calculate the temporal coherence as an indication for a chimera state , based on the pairwise correlation coefficients : where , are the time series of the phases of two oscillators in the nodes and , respectively .the normalized distribution function is a measure for the correlation in time and the percentage of the time - correlated oscillators is given by : where the coherent accuracy for correlated oscillators is .the influence of the coupling range on the spatial and temporal coherence of the observed dynamics is depicted in figure [ fig : g0h0 ] . ) and temporal ( ) coherence for the chimera states shown in figure [ fig : multiplot ] and figure [ fig : spacetime ] for ( a ) , ( b ) , ( c ) , ( d ) .other parameters are , , , , , and . 
] both measures , and , are within the parameter range that ensures the existence of chimera states .as increases , so does the size of the coherent clusters , which is reflected by the increasing values of and .moreover , in all cases is fixed in time and fluctuates slightly around a constant value ( this effect diminishes for larger ) ; therefore , the chimera states are _ stationary _ and _ static _ according to the classification scheme of .the above analysis elucidates that the replicator - mutator dynamics of the pgg organized on ring networks with nonlocal coupling support either synchronization or chimera states , whose features depend on parameters determining dynamical and topological properties . in the following , a detailed analysis of this dependence will be presented by focusing on two parameters , the damage and the coupling range . for our analysiswe take into account that the populations in the nodes of coherent and incoherent domains oscillate with mean phase velocities and , respectively .the faster populations in the incoherent domain oscillate with .therefore , by looking at the difference one can ensure that chimera states exist when is larger than a certain threshold .extensive numerical simulations have revealed that a small change in the parameter can cause suddenly an abrupt , first order transition between synchronized and chimera states , which is characterized by a hysteresis loop ( see figure [ fig : hysteresis_d](a ) orange colored area ) . starting from an initial configuration of a chimera state with four ( in)coherent clusters we perform numerical simulations ( continuation ) by increasing and then decreasing slowly the damage for fixed coupling range .figure [ fig : hysteresis_d](b ) shows that a gradual increase of the damage ( which shifts the system further from the hopf bifurcation ) changes slightly the position and the size of the incoherent clusters up to a critical value for which an abrupt transition occurs suddenly and brings the system to a synchronized state where it remains thereafter .figure [ fig : hysteresis_d](c ) shows an opposite ( but qualitatively similar ) scenario : decreasing the damage of the game gives rise to an abrupt transition which brings the system back to a chimera state with four ( in)coherent clusters .however , this second transition takes place at a different value of , resulting in the observed hysteresis loop ( c.f . , ) .starting from the same initial configuration as above , we now perform numerical continuation by decreasing and then increasing the coupling range for fixed damage .figure [ fig : hysteresis_r](a ) shows that an abrupt transition from a chimera to a synchronized state and back occurs suddenly and is characterized by a hysteresis loop . like in the case of varying , there is a window of values for the coupling range ( orange colored area ) where for the same topology ( i.e. same ) the system can either be self - organized into a chimera state with four ( in)coherent clusters or be synchronized , depending on the initial conditions .figures [ fig : hysteresis_r](b ) and ( c ) illustrate the mean phase velocity of each population as a function of .this allows to discriminate the existence of ( in)coherent clusters ( i.e. 
existence of chimera states ) , their position and their size , for both directions of the continuation .numerical continuation between different limits for or has revealed that , in general , different initial configurations give rise to various transitions between synchronization and chimera states .interestingly , transitions between chimera states with different number of ( in)coherent clusters were also found ( see supporting information ) .for the first time we report on the existence of synchronization and chimera states in ring networks with nonlocal coupling obeying the replicator - mutator dynamics of a pgg with cooperators , defectors and destructive agents .our findings reflect the tendency of metapopulations to evolve collectively in a coherent way or be fragmented in clusters of synchronous and incoherent behavior .the transition between these steady states occurs through an abrupt first order transition .a systematic numerical analysis has revealed that chimera states are stationary and static , while the number of ( in)coherent clusters varies depending on the coupling range , and on the parameters that determine the local dynamics . interestingly ,the first order transitions which shift the system between steady states are characterized by strong hysteresis loops , where multistability is observed . in the hysteresis loop , depending on the initial conditions , either global synchronization or chimeras with varying number of ( in)coherent clusters are achieved .our study provides for a new framework for the analysis of spontaneously emergent spatiotemporal phenomena in game theory , and particularly their effect on the cooperation - defection - destruction cyclic dynamics triggered by damaging individuals .since synchronized or incoherent actions can influence cooperation and the efficiency of groups , the appearance of the chimera states , in which the cyclic dynamics is accelerated , may have a relevant impact on such public goods creation and , hence , on the speed of evolution and innovation .therefore , the stylized model presented here , may be adapted and completed to find applications in biological , social or economic systems .as an example , the results found here can support the design of feedback schemes which , by promoting modifications in the strategy ( dynamics ) or in the connectivity structure ( topology ) , control the collective global or clustered behavior of metapopulations in order to , for instance , diminish long destructive periods or enhance innovation , as well as on biological synthetic systems , where chimera states may speed up reaction processess and evolution .n.e.k . , r.j.r . andacknowledge financial support by the lasagne ( contract no.318132 ) eu - fp7 project .n.e.k . anda.d .- g . also acknowledge financial support by the multiplex ( contract no.317532 ) eu - fp7 project , the mineco ( projects fis2012 - 38266 and fis2015 - 71582 ) , and the generalitat de catalunya ( project 2014sgr-608 ) . j.h .acknowledge financial support by the siemens research program on `` establishing a multidisciplinary and effective innovation and entrepreneurship hub '' .
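as a practical companion to this study, a short python sketch of the coherence diagnostics defined above — the discrete-laplacian local curvature, the spatial-coherence fraction ĝ0, and the share of time-correlated node pairs — is given below. the thresholds (1% of the maximum curvature, |ρ| ≥ 0.99) are of the kind used in such classification schemes but are not necessarily the exact values behind the figures, and the final toy snapshot is only a caricature of a two-cluster chimera.

```python
import numpy as np

# Hedged sketch of the coherence measures applied to phases on a ring.
def local_curvature(phi):
    """Discrete Laplacian of the phase snapshot phi (periodic boundary conditions)."""
    return np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)

def g0(phi, frac=0.01):
    """Fraction of nodes that are spatially coherent in the snapshot phi."""
    curv = np.abs(local_curvature(phi))
    return np.mean(curv <= frac * curv.max())

def correlated_fraction(phases_t, thresh=0.99):
    """Share of node pairs whose phase time series are strongly correlated.

    phases_t has shape (timesteps, nodes), e.g. the output of a ring simulation."""
    rho = np.corrcoef(phases_t.T)
    iu = np.triu_indices_from(rho, k=1)
    return np.mean(np.abs(rho[iu]) >= thresh)

# Toy snapshot: half the ring locked to a smooth profile, half random -- a
# caricature of a two-cluster chimera; g0 should land near 0.5.
rng = np.random.default_rng(1)
N = 200
phi = np.concatenate([np.linspace(0.0, 0.2, N // 2),
                      2 * np.pi * rng.random(N // 2)])
print(g0(phi))
```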
we found that a network - organized metapopulation of cooperators , defectors and destructive agents playing the public goods game with mutations , can collectively reach global synchronization or chimera states . global synchronization is accompanied by a collective periodic burst of cooperation , whereas chimera states reflect the tendency of the networked metapopulation to be fragmented in clusters of synchronous and incoherent bursts of cooperation . numerical simulations have shown that the system s dynamics alternates between these two steady states through a first order transition . depending on the parameters determining the dynamical and topological properties , chimera states with different numbers of coherent and incoherent clusters are observed . our results present the first systematic study of chimera states and their characterization in the context of evolutionary game theory . this provides a valuable insight into the details of their occurrence , extending the relevance of such states to natural and social systems .
the system , as in , , and , consists of relays assisting the transmitter and the receiver in the half - duplex mode , i.e. in each time , the relays can either transmit or receive .the channels between each two node is assumed to be quasi - static flat rayleigh - fading , i.e. the channel gains remain constant during a block of transmission and changes independently from one block to another .however , we assume that there is no direct link between the transmitter and the receiver .this assumption is reasonable when the transmitter and the receiver are far from each other or when the receiver is supposed to have connection with just the relay nodes to avoid the complexity of the network . as in and ,each node is assumed to know the state of its backward channel and , moreover , the receiver is supposed to know the equivalent channel gain from the transmitter to the receiver .no feedback to the transmitting node is permitted .all nodes have the same power constraint .also , we assume that a capacity achieving gaussian random codebook can be generated at each node of the network .hence , the code design problem is not considered in this paper .in the proposed scheme , the entire block of transmission is divided into sub - blocks .each sub - block consists of slots .each slot has symbols .hence , the entire block consists of symbols . in order to transmit a message , the transmitter selects the corresponding codeword of a gaussian random codebook consisting of codewords of length and transmits the codeword during the first slots . in each sub - block, each relay receives the signal in one of the slots and transmits the received signal in the next slot .so , each relay is off in of time .more precisely , in the slot of the sub - block ( ) , the relay receives the signals the transmitter is sending , and amplifies and forwards it to the receiver in the next slot .the receiver starts receiving the signal from the second slot . after receiving the last slot ( slot ) signal , the receiver decodes the transmitted message by using the signal of slot received from relays .it will be shown in the next section that the equivalent point - to - point channel from the transmitter to the receiver would act as a lower - triangular mimo channel .in this section , we show that the proposed method achieves the optimum achievable diversity - multiplexing curve . first , according to the cut - set bound theorem , the point - to - point capacity of the uplink channel ( the channel from the transmitter to the relays ) is an upper - bound for the capacity of this system .accordingly , the diversity - multiplexing curve of a simo system which is a straight line from multiplexing gain to the diversity gain is an upper - bound for the diversity - multiplexing curve of our system . in this section, we prove that the tradeoff curve of the proposed method achieves the upper - bound and thus , it is optimum .first , we prove the statement for the case that there is no link between the relays .next , we prove the statement for the general case .assume , the link gain between the relay and the transmitter and the relay and the receiver are and , respectively .furthermore , assume that there is no link between the relays .accordingly , at the relay we have where is the received signal vector of the relay , is the transmitter signal vector and is the noise vector of the channel . 
at the receiver side, we have where is the transmitted signal vector of the relay , is the received signal vector at the receiver side and is the noise vector of the downlink channel .the output power constraint holds at the transmitter and relays side . to obtain the dm tradeoff curve of the proposed scheme, we are looking for the end - to - end probability of outage from the rate , as goes to infinity .assume a half - duplex parallel relay scenario with no interfering relays .the proposed sm scheme achieves the diversity gain which achieves the optimum achievable dm tradeoff curve as .let us define as the signal / noise transmitted / received by the transmitter / relay / receiver to the relay / receiver in the slot of the sub - block .also , let us define and .thus , we have where is the amplification coefficient performed in the relay . defining the event as the event of outage from the rate in the sub - channel consisting of the transmitter , the relay , and the receiver , we have \leq r\log(p)\right\ } \nonumber \\ & \doteq & \min \left\ { \mbox{sign}(r ) , \mathbb{p}\left\{|g_k|^2|{\alpha}_k|^2|h_k|^2\left(1+|g_k|^2|{\alpha}_k|^2\right)^{-1 } \leq p^{r-1 } \right\ } \right\ } \nonumber \\ & \stackrel{(a)}{\doteq } & \min \left\ { \mbox{sign}(r ) , \mathbb{p}\left\{|g_k|^2|{\alpha}_k|^2|h_k|^2 \min \left\{\frac{1}{2 } , \frac{1}{2|g_k|^2|{\alpha}_k|^2 } \right\ } \leq p^{r-1 } \right\ } \right\ } \nonumber \\ & \stackrel{(b)}{\doteq } & \min \left\{\mbox{sign}(r ) , \mathbb{p } \left\{|h_k|^2 \leq 2p^{r-1 } \right\ } + \mathbb{p } \left\ { p^{r-1 } \right\ } \right\ } \nonumber \\ & \stackrel{(c)}{\doteq } & \min \left\ { \mbox{sign}(r ) , p^{-(1-r ) } + \mathbb{p } \left\ { |g_k|^2\min \left\ { \frac{1}{2 } , \frac{|h_k|^2p}{2 } \right\ } \leq 2 p^{r-1 } \right\ } \right\ } \nonumber \\ & \stackrel{(d)}{\doteq } & \min \left\ { \mbox{sign}(r ) , p^{-(1-r ) } + \mathbb{p } \left\ { |g_k|^2 \leq 4 p^{r-1 } \right\ } + \mathbb{p } \left\ { |g_k|^2|h_k|^2 \leq 4 p^{r-2 } \right\ } \right\ } \nonumber\\ & \stackrel{(e)}{\doteq } & \min \left\ { \mbox{sign}(r ) , p^{-(1-r ) } \right\ } , \label{eq : sbch_ni}\end{aligned}\ ] ] where is the sign function , i.e. . here ,( a ) follows from the fact that , ( b ) and ( d ) follow from the union bound inequality , ( c ) follows from the fact that and the pdf distribution of the rayleigh - fading parameter near zero , and ( e ) follows from the fact that the product of two independent rayleigh - fading parameters behave as a rayleigh - fading parameter near zero .( [ eq : sbch_ni ] ) shows that each sub - channel s tradeoff curve performs as a single - antenna point - to - point channel . defining the random variable showing the rate of the sub - channel consisting of the transmitter , the relay , and the receiver in terms of , the outage event of the entire channel from the , the event , is equal to assuming , we have is known by ( [ eq : sbch_ni ] ) . defining the region as it is easy to check that all the vectors that result in the outage event almost surely lie in .in fact , according to ( [ eq : sbch_ni ] ) , for all we know . also , for , which is exponential in terms of .hence , can be disregarded for the outage region . as a result , . 
on the other hand , by ( [ eq : sbch_ni ] ) and the fact that s are independent, we have now , we show that .first of all , by taking derivative of ( [ eq : cdf_ni ] ) with respect to , it is easy to see that the probability density function of behaves the same as the probability function in ( [ eq : cdf_ni ] ) , i.e. .hence , the outage probability is equal to here , ( a ) follows from the fact that is a fixed bounded region whose volume is independent of . on the other hand , by continuity of over , we have which combining with ( [ eq : r_ub_ni ] ) , results into . defining , we have to solve the following linear programming optimization problem .notice that the region is defined by a set of linear inequality constraints . to solve the problem , we have here , ( a ) follows from the inequality constraint in ( [ eq : r_df_ni ] ) governing , and ( b ) follows from the fact that and .now , we partition the range into three intervals .first , in the case that , the feasible point achieves the lower bound .second , in the case that , the feasible point , achieves the lower bound . finally , in the case that , the lower bound is achievable by the feasible point .hence , we have .this completes the proof ._ remark - _ it is worth noting that as long as the graph whose vertices are the relay nodes and edges are the non interfering relay node pairs includes a hamiltonian cycle , that goes exactly one time through each vertex of the graph . ]the result of this subsection remains valid . in the general case , an interference term due to the neighboring relay adds at the receiver antenna of each relay . where is the interference link gain between the and relays .hence , the amplification coefficient is bounded as . here , we observe that in the case that , the noise at the receiving side of the relay can be boosted at the receiving side of the next relay . hence , we bound the amplification coefficient as . in this way, it is guaranteed that the noise of relays are not boosted up through the system .this is at the expense working with the output power less than .on the other hand , we know that almost surely , for all values of .] .hence , almost surely we have . another change we make in this partis that we assume that the entire time of transmission consists of slots , and the transmitter sends the data during the first slots while the relays send in the last slots ( from the second slot up to the slot ) .hence , we have .this assumption makes our analysis easier and the lower bound on the diversity curve tighter .now , we prove the main theorem of this section .consider a half - duplex multiple relays scenario with interfering relays whose gains are independent rayleigh fading variables .the proposed sm scheme achieves the diversity gain which achieves the optimum achievable dm tradeoff curve as .first , we show that the entire channel matrix acts as a lower triangular matrix . 
at the receiver side ,we have here , has the following recursive formula .defining the square matrices as , , , and where is the kronecker product of matrices and is the identity matrix , and the vectors ^t ] , ^t ] , we have here , we observe that the matrix of the entire channel acts as a lower triangular matrix of a mimo channel whose noise is colored .the probability of outage of such a channel for the multiplexing gain is defined as where , and .assume , , , and as the region in that defines the outage event in terms of the vector ] .the probability distribution function ( and also the inverse of cumulative distribution function ) decays exponentially as for positive values of .hence , the outage region is almost surely equal to .now , we have here , ( a ) follows from the fact that for a positive semidefinite matrix we have , ( b ) follows from the fact that and assuming is large enough such that , and ( c ) follows from the fact that and accordingly , , and knowing that the sum of the entries of each row in is less than , we have which has the property that for every , is positive semidefinite . ] , and , and conditioned on , we have and and consecutively .on the other hand , we know for vectors , we have .similarly to the proof of theorem 1 , by taking derivative with respect to we have .defining the lower bound as , the new region as , the cube as ^{2k}$ ] , and for , , we observe \in \mathcal{\hat{r } } \cap \mathcal{i}_i^c \right\ } } \nonumber \\ & \dot{\leq } & vol ( \mathcal{\hat{r } } \cap \mathcal{i } ) p^{-\min_{\left [ \mathbf{\mu}^0 , \mathbf{\nu}^0 \right ] \in \mathcal{\hat{r } } \bigcap \mathcal{i } } \mathbf{1 } \cdot \left ( \mathbf{\mu}^0 + \mathbf{\nu}^0 \right ) } + 2k p^{-kl_0 } \nonumber \\ & \stackrel{(b)}{\doteq } & p^{-kl_0 } \nonumber \\ & \doteq & p^{-\left[k \left ( 1 - r \right ) - \frac{r}{n } \right]}. \label{eq : t2_r_wi}\end{aligned}\ ] ] here , ( a ) follows from ( [ eq : r_hat_wi ] ) and ( b ) follows from the fact that is a bounded region whose volume is independent of .( [ eq : t2_r_wi ] ) completes the proof . _remark - _ the statement in the above theorem holds for the general case in which any arbitrary set of relay pairs are non - interfering .hence , the proposed scheme achieves the upper - bound of the tradeoff curve in the asymptotic case of for any graph topology on the interfering relay pairs .
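the scaling behaviour established above can be probed with a simple monte carlo experiment; the python sketch below uses the simplified per-sub-channel snr p |g|^2 |h|^2 / (1 + |g|^2) suggested by eq. ( [ eq : sbch_ni ] ) and declares outage when the sum rate of the k parallel sub-channels falls below k * r * log2(p). the amplification rule, the rate normalization and the constants are deliberate simplifications, so only the slope of the outage curve (the diversity order) should be read off, not its absolute level.

```python
import numpy as np

# Hedged Monte Carlo check of the outage behaviour for the no-interference case.
rng = np.random.default_rng(0)

def outage_prob(P_dB, K=2, r=0.5, trials=400_000):
    P = 10.0 ** (P_dB / 10.0)
    g2 = rng.exponential(size=(trials, K))        # Rayleigh fading -> exponential power gains
    h2 = rng.exponential(size=(trials, K))
    snr = P * g2 * h2 / (1.0 + g2)                # simplified per-sub-channel received SNR
    rate = np.log2(1.0 + snr).sum(axis=1)
    return np.mean(rate < K * r * np.log2(P))

for P_dB in (10, 20, 30, 40):
    print(P_dB, outage_prob(P_dB))
# Plotting log10 of these probabilities against P_dB/10 gives a slope whose
# magnitude estimates the diversity gain; for this simplified model it should
# approach K * (1 - r), the large-n limit of the trade-off derived above.
```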
in this paper, a multiple-relay network is considered, in which single-antenna relays assist a single-antenna transmitter to communicate with a single-antenna receiver in half-duplex mode. a new amplify-and-forward (af) scheme is proposed for this network and is shown to achieve the optimum diversity-multiplexing trade-off curve.
at present , most questions about how things work in biological systems are answered by experimental exploration .the situation in physics is very different , where theory and experiment are more equal partners . almost from the moment that biology and physics became separate sciences , physicists have hoped that we could reach an understanding of life that parallels our understanding of the inanimate world .although there have been several waves of enthusiasm , each with its own successes , such hopes often have seemed quite fanciful . today , as more of the living world becomes susceptible to quantitative experiments , old dreams are being rekindled .the increasing body of quantitative data calls out for analysis , sometimes quite desperately , and creates opportunities to make mathematical models for particular biological systems .indeed , the notion of `` modeling '' as part of a modern , quantitative biology is becoming conventional .but theoretical physics is not a collection of disparate models for particular systems , or a catalogue of special cases .there is a growing community of theorists who want , as it were , more out of life .we want a theoretical physics of biological systems that reaches the level of predictive power that has become the standard in other areas of physics .we want to reconcile the physicists desire for concise , unifying theoretical principles with the obvious complexity and diversity of life .we want theories that engage meaningfully with the myriad experimental details of particular systems , yet still are derivable from principles that transcend these details .the existence of a community of optimists does not imply that our optimism is justified .the goal of this essay is to explain why at least one theorist ( me ) is optimistic .i hope to convince you that theory has had important successes , shaping how we think about life today , and that this is true despite a widespread impression to the contrary . turning from the past to the present and future, i will argue this is an auspicious time : theory is having a real impact on experiment , related theoretical ideas are emerging in very different biological contexts , and we can see hints of ideas that have the power to unify and deepen our understanding of diverse phenomena . what is emerging from our community goes beyond the `` application '' of physics to the problems of biology .we are asking physicists questions about the phenomena of life , looking for the kinds of compelling answers that we expect in the traditional core of physics .any effort to justify optimism must be addressed not just to the agnostic , or to the converted , but to the actively skeptical .many biologists believe that we just `` do nt know enough '' to theorize about the particular systems that they study , and have a hard time pointing to examples where approaches grounded in mathematical thinking have illuminated the workings of these systems . for many physicists ,the phenomena of life still look too messy to be accessible , and they doubt if there is anything very fundamental to be said , or if digging into the phenomena of life just means sifting through a mass of detail . 
my goal in this essay is to respond to these concerns directly .i hope to convince you that the pessimistic biologists are wrong about the history , and that the pessimistic physicists are wrong about the current state of the field .in general , it seems best to let the work of the community speak for itself , and provide its own justification for our optimism , rather than making pronouncements about what anyone else should be doing or thinking .but , in 2014 , the simons foundation convened the first of what is now an annual series of workshops on _ theory in biology _ , and i was given the task of providing some perspectives .this led me to think more explicitly about the grounds for my own optimism , and about the history of theory in our field ; this essay grew out of that short lecture .it came at the end of a long day and so , perhaps ironically , it was more descriptive than mathematical .in the late nineteenth century , continuing through the early 1900s , many of the great figures of classical physics routinely crossed the boundaries between subjects that we now distinguish as physics , chemistry , biology , and even psychology .in particular , lord rayleigh had an interest in hearing , which he viewed as an extension of his interests in the theory of sound . in a paper from 1907entitled `` on our perception of sound direction , '' rayleigh developed ideas that are probably familiar even if you do nt know their origin .for sounds at high frequencies , the wavelength is shorter than the width of your head and thus your head casts the acoustic equivalent of a shadow .so if sound is coming from your right , it is more intense in your right ear than in your left , and there are plenty of direct experiments to show that this is indeed how you localize high frequency sounds you pick up on the intensity difference between your two ears .what rayleigh understood was that if you go to low frequencies this does nt work : the wavelength becomes longer than the size of your head , and hence your head no longer casts a shadow .the only remaining clue to the location of the sound source is then the timing or phase difference between your ears .of course there s another possibility , which is that you ca nt actually localize low frequency sounds , so rayleigh had to check this , and he went on to devise experiments that tested directly whether you could hear the phase or time differences . at this point in history, there was a predominant assumption that we are phase deaf that we can hear the intensities of the component notes of a sound , but not their phases .but the physics of the situation tells us that if we re going to localize sound at low frequencies then we _ must _ hear phase differences , so there s an immediate qualitative prediction , and this was confirmed .rayleigh phrased his conclusions poetically but accurately : `` it seems no longer possible to hold that the vibratory character of sound terminates at the outer ends of the nerves along which the communication with the brain is established . on the contrary , the processes in the nerve must themselves be vibratory , not of course in the gross mechanical sense , but with preservation of the period and retaining the characteristic of phase a view advocated by rutherford , in opposition to helmholtz , as long ago as 1886 . '' in modern language , action potentials in primary auditory neurons must `` phase lock '' to the temporal fine structure of the acoustic waveform . 
if we push beyond these qualitative arguments , we find a surprising quantitative conclusion .the smallest difference in source direction that we can discriminate , using low frequency tones , corresponds to a difference in time between our two ears of only a few microseconds and if you were a barn owl it would only be one microsecond .this is even more startling since the characteristic time for everything to happen in the nervous system is a millisecond , not a microsecond .it was more than 50 years before anyone recorded from a neuron that actually implemented these timing comparisons , and it took even longer to demonstrate that precision really is in the microsecond range .while one example does nt make a rule , we can try to identify a strategy at work in this example .rayleigh started with a few facts about biology , and added a few basic physical principles .thinking hard about how these connect ( or conflict ) , he arrived at a theory .this theory made qualitative predictions , and provided a new framework for quantitative discussion .this framework in turn yielded startling results , and set in motion a sequence of experiments that played out over many decades .figure [ watson+crick ] shows another example , one perhaps more familiar to most of you .none of the papers cited here have original data , and so , by that criterion , these certainly are theoretical papers .i think it s deeper than that .these papers follow the pattern that i just suggested to you based on rayleigh s work : start with a small number of biological facts , add some basic physical principles , and mix carefully .as you know , watson and crick were trying to build models for the molecular structure of dna , and in such an effort it is crucial that there are _ rules _ of chemical bonding , not suggestions about chemical bonding .so there are real things , quantitative principles from physics and chemistry , on which one can rely . and , in trying to fit all these things together , there appear not to be that many solutions .you know what happened next .the first paper by watson and crick ends with the cryptic remark about how it has not escaped their attention that the structure they propose has implications , and in the second paper those implications are worked out .the intellectual shockwaves which propagated outward from refs are so well known that they do nt require a review here , but it s important to look back at what really got said and done in these papers .let me note , in particular , the passage `` ... any sequence of the pairs of bases can fit into the structure .it follows that in a long molecule many different permutations are possible , and it therefore seems likely that the precise sequence of the bases is the code which carries the genetical information . ...one chain is , as it were , the complement of the other , and it is this feature which suggests how the deoxyribonucleic acid might duplicate itself . 
'' .it is crucial to appreciate that these theoretical predictions are _ not _ consequences of experimental observations .as is well known , parallel to the model building efforts of watson and crick in cambridge , x ray diffraction experiments on dna were being done by franklin , wilkins , and colleagues in london .franklin s famous photograph fifty one , which appears in ref , provided qualitative evidence for a helical structure , and made it possible to read off the basic dimensions of the helix ; these results were quickly clarified in a sequence of papers from franklin and gosling .but , nearly a decade later , x ray diffraction data still were not of high enough resolution to `` see '' the pattern of complementary base pairing without relying on models to help interpret the data .langridge et al provide a very clear discussion of how the data in 1960 were sufficient to test a proposed structure , but not sufficient to determine the structure directly .so , in 1953 , base pairing was a theory . and no amount of structural information alone would be sufficient to conclude that `` the sequence of bases is the code which carries genetical information . ''that was a theory too .the idea that the sequence of bases forms a code defines the problem of deciphering this code , and this attracted attention from many theorists ; a highlight from this period is crick s 1958 paper , with excerpts in fig [ watson+crick ] .this is , i think , the paper in which the ideas and phrasing that i have emphasized in the figure appear for the first time .we still use the words `` central dogma , '' but by now the `` sequence hypothesis '' is so fully internalized that we do nt even give it a name .but , as the text states quite explicitly , there was no direct evidence for either proposal .and i invite you to note some of the language that crick uses , again to emphasize the theoretical character of what was going on : i tried to build explanations that did nt use these ideas and i could nt .
in the decade or so between the proposal of the double helix and the working out of the genetic code , many theorists proposed coding schemes that were quite interesting mathematically , but we now know that none of these proposals was the one chosen by nature .further , by the time the experiments which mapped the code were coming to fruition , nobody doubted that there is a genetic code ; that is , the `` sequence hypothesis '' had become obvious , and the problem was to work out what the sequences meant .the combination of these two facts obscures the essentially theoretical foundations of the subject .as far as i know , all of the experiments that discovered the key features of the genetic code were designed with the theoretical ideas of ref in mind .there still are people who ask whether theory will someday , in the distant future , make a contribution to biology .thus it is essential to point out that theory already has made contributions , and big ones at that .many of the foundational papers in what we now call molecular biology were unambiguously theoretical papers , and the example of rayleigh points to a theoretical tradition that reaches much farther back into the history of interactions between physics and biology .but these examples also have problems .first , in the case of watson and crick , it appears that all the theorizing was in words and not in equations , and so what s written in these papers does nt look like theory in the sense that we use the term in physics .i m not sure that s really fair , because when they went to build a molecular model , the bonds come in particular lengths , neighboring bonds adopt particular angles , and these numbers actually matter .thus , there were equations , but they were embedded in this structural knowledge .still , you might worry .second , this was theorizing in which the relevant principles were at the level of molecular structure .this is a level at which , i think , nobody would doubt that physical principles are relevant for biology . but it is nt clear how you would ever get from that level up to the level that concerns many of us today , the level of `` systems , '' whether we mean systems inside one cell , in a developing embryo , in a network of neurons in the brain , or in a group of organisms behaving cooperatively . at the opposite extreme , the physical principles to which rayleigh appealed were completely outside the organism , too macroscopic to help us with most of what we re trying to do today , while what watson and crick were doing was too microscopic . thus , while these examples tell us that theorizing in this spirit can be incredibly powerful , the kind of theories that these guys were building does nt match what we d like to do today .finally , there is a question about the connection between theory and experiment .
by the time of rayleigh s work , there was a well established tradition of trying to make quantitative connections between our perceptions and the properties of the physical signals at the input to our sense organs ; this subject of `` psychophysics '' would grow and deepen throughout the twentieth century .the fundamental prediction made by watson and crick was about the structure of a molecule , and the decades following their work would see the emergence of x ray diffraction experiments with atomic resolution , even in large biological structures .thus , in both our examples , the theory pointed toward experiments that could be done quantitatively , indeed with methods that are not so far from the traditional core of experimental physics .is this the norm , or an exception ? in seminars one often hears words to the effect that `` the agreement between theory and experiment is nt perfect , but , well , you know , it s biology . '' as an excuse for a little scatter around predictions this might be acceptable , although i find it a bit annoying .but in these excuses i sometimes detect an implicit claim that there s more going on , that there is something fundamental about biology that prevents us from having the kind of detailed , quantitative comparison between theory and experiment that we are used to in the physical sciences .this is not about describing things at the second decimal place ; the worry is rather that there might be some irreducible sloppiness that we ll never get our arms around , and that this could spell doom for the physicist s dreams .i am surprised by how many physicists simply accept the claim that biology is a messy business .as explained at ( perhaps too much ) length in ref , one s views on these matters depend on how you are introduced to biology .if your first exposure is to very complex systems where it is difficult both to maintain control and to make quantitative measurements , then the search for precision can seem hopeless .but if you start , instead , by studying the ability of the visual system to count single photons , and realize that in the receptor cells of the retina there are changes in the concentration of internal messengers which are biologically meaningful , you have a different view .
to summarize , the classical examples are inspiring , but the challenge for theory in our time is ( at least ) three fold .first , we have to identify principles that organize our thinking at a systems level .second , we have to express these principles in mathematical terms .third , if we expect our mathematical theories to make quantitative predictions , we have to push our experimentalist friends to expand the range of life s phenomena that are accessible to correspondingly quantitative measurements .as a first step in addressing the three problems i have just raised , let me try another classic example , which then moves toward the kinds of problems that concern us today ; the classical piece is the work of hodgkin and huxley .they showed that the electrical dynamics of a neuron , that is , the voltage across the membrane as a function of space and time , are determined by the dynamics of what we now call ion channel molecules in the cell membrane .these are proteins , and hodgkin and huxley described the kinetics with which these proteins switch among different states .this switching depends on the voltage across the membrane ; some of the states are open and allow ionic current to flow , others are closed and do not ; when you put all of this together , you end up with a coupled system of nonlinear equations for both the states of the channels and for the voltage itself .these are the hodgkin huxley equations .hodgkin and huxley studied the squid giant axon , which really is giant , the size of a small drinking straw .in particular you can pass a wire down the middle of it and short circuit things so that the voltage all across the membrane is uniform , isolating the dynamics of ions flowing across the membrane .once you characterize the dynamics of this `` space clamped '' axon , you can add back the flow of current along the axon , because this just involves the conductivity of the ionic solution , nothing fancy about the membrane .the resulting equations predict that signals converge onto stereotyped pulses that propagate with a definite velocity .these pulses are the action potentials or `` spikes '' that are the nearly universal mechanism of communication among neurons in the brain .the hodgkin huxley equations predict , correctly , the shape of the voltage spikes and their propagation velocity .hodgkin and huxley had the good fortune that the dynamics of the squid giant axon is dominated by two kinds of ion channels : one sodium channel and one potassium channel .in contrast , our genome encodes roughly one hundred ion channels , or more if you count splicing variants .
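to make `` these are the hodgkin huxley equations '' concrete , here is a minimal sketch of the space clamped model with the standard textbook parameter values ; the crude euler integrator and the particular injected current are conveniences for illustration , not anything taken from the experiments discussed above .

```python
import numpy as np

# standard textbook squid-axon parameters: conductances in mS/cm^2, potentials in mV, capacitance in uF/cm^2
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

# voltage dependent opening (alpha) and closing (beta) rates for the gating variables m, h, n
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * np.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """integrate the space-clamped hodgkin-huxley equations with a constant injected current (uA/cm^2)."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting values
    trace = []
    for _ in range(int(t_max / dt)):
        # ionic currents through the open sodium, potassium and leak channels
        i_ion = G_NA * m**3 * h * (v - E_NA) + G_K * n**4 * (v - E_K) + G_L * (v - E_L)
        v += dt * (i_ext - i_ion) / C_M
        # first-order kinetics for the fraction of gates in the open state
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return np.array(trace)

v_t = simulate()
n_spikes = int(np.sum((v_t[1:] > 0.0) & (v_t[:-1] <= 0.0)))
print("peak voltage %.1f mV , %d stereotyped spikes in 50 ms" % (v_t.max(), n_spikes))
```

the point of the sketch is only that a handful of coupled , nonlinear equations of this form is enough to generate the stereotyped spikes described above .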
the typical neuron in your head might express ten different channels .there was a period which was very productive , during which many groups showed that the vision of hodgkin and huxley was correct , even if most neurons are more complicated than the squid axon : if you take your favorite neuron , you can reduce its electrical dynamics to a description in terms of several different kinds of channels .in favorable cases you can even measure , independently , the current flowing through single ion channels , watching the channels open and close , and showing that the kinetics of these transitions is consistent with the form of the equations that hodgkin and huxley wrote down .the industry of building ( generalized ) hodgkin huxley models for neurons hummed along for decades , resting on a foundation of detailed , quantitative experiments .there is , however , a problem , first emphasized by larry abbott and his colleagues : if we are going to use many different kinds of ion channels to describe the dynamics of a single neuron , how many of each kind should we use ? while it is possible to measure the kinetics of the individual channels , it s much harder to count directly how many channels of each type are present in the membrane of a single cell .so , you typically have to extract these numbers by fitting to data on the electrical dynamics itself .somebody gives you the ingredients and then you have to dial the knobs to reproduce the behavior of the neuron , and the more realistic the description , with more different kinds of channels , the harder this problem becomes .the key insight was to step away from the problem of describing a particular neuron and ask : in the space of all the possible neurons i could build with this many kinds of ion channels , what can i get ?an example of the range of possibilities that a cell could access by changing the numbers of just two ion channels is shown in fig [ lfa ] .this example is drawn from a detailed model for a particular neuron in the stomatogastric ganglion of the crab , which has been studied extensively and thus provides a good test case , but there is no reason to think that what we are seeing is specific to this neuron .
by varying the copy numbers of just two types of channel , we can produce cells that are silent , cells that fire single , isolated action potentials , like the ticks of a clock , cells that generate bursts with two or three spikes per burst , and more .along one direction we can see transitions through three qualitatively distinct behaviors when the number of copies of one channel is changed by just 10 - 20% .it does nt take much imagination to think that these quantitative changes in tempo of activity matter to the organism .this means that our problem in fitting models can be identified with the cell s problem in controlling its own behavior : how does a cell manage to sit in the middle of one functional region , and not wander off into other regions ?what abbott and colleagues proposed was that cells set the number of channels by monitoring what they are doing .so , for example , a cell could monitor its internal calcium concentration .when the voltage across the membrane changes , as during an action potential , calcium channels open and close , calcium flows in , and this provides a monitor of electrical activity .the calcium concentration is known to feed into many biochemical pathways inside the cell , and we can imagine that some of these could regulate either the expression of the channels or their insertion into the membrane . mechanisms of this type allow cells to stabilize the very different behaviors seen in fig [ lfa ] , essentially because the map of calcium concentration vs channel copy numbers neatly overlays the map of spiking rhythms .one can do even more subtle things by having multiple calcium sensors with different timescales , and much of what we are saying here about the nature of the mapping between channel copy numbers and functional dynamics in single neurons can be generalized to thinking about small networks .these ideas were quickly confirmed .perhaps the most dramatic experiment involves taking a neuron , ripping it out of the network and putting it in a dish in which the external ionic concentrations are completely bizarre . as a result , when one channel opens the current might flow in the wrong direction , and of course the cell goes completely wild .but if you come back a day later , the cell is back doing its normal thing .it knows what it s trying to do , if one can speak anthropomorphically .this is a beautiful subject , still under rapid development .this example points to a very important transition , which we might think of as a transition between models and theories .hodgkin and huxley proposed a model , and for 30 + years , the goal was to fit that model to the behavior of particular neurons .it was only in the early 90s that abbott and company suggested that we look beyond the particular , and take the generalized hodgkin huxley equations seriously as a theory of what neurons might do .these equations allow the construction of cells that belong to a large class , and within that class there are cells that do nt exist in nature .thus , it s not a model of anything in particular ; it s a theory for a class of things that can happen , and within that theory there are questions such as how one should set the ( many ) parameter values . stated this way , the question is internal to the theory , but then we can jump to suggest that this is a problem that neurons themselves actually need to solve .
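the logic of this feedback is easy to caricature . the sketch below is deliberately cartoonish , not the model of the papers cited above : the `` calcium '' readout , the target value , and the regulation rule are all invented for illustration , but they show how very different starting conductances can be pulled to the same functional balance .

```python
import numpy as np

# cartoon "calcium" readout: a stand-in for the activity signal a real cell could sense,
# here just an increasing function of the balance between an inward and an outward conductance
def calcium(g_in, g_out):
    return 1.0 / (1.0 + np.exp(-(g_in - g_out)))

CA_TARGET = 0.6   # the set point the feedback tries to restore (illustrative)
TAU = 50.0        # slow regulation time scale, in units of the update step

def regulate(g_in, g_out, steps=3000):
    """slow feedback: too little activity grows the inward conductance and shrinks the
    outward one; too much activity does the opposite."""
    for _ in range(steps):
        err = CA_TARGET - calcium(g_in, g_out)
        g_in = max(0.0, g_in + err / TAU)     # inward current raises activity
        g_out = max(0.0, g_out - err / TAU)   # outward current lowers activity
    return g_in, g_out

# very different starting points end up with the same functional balance of conductances
for start in [(0.1, 5.0), (8.0, 0.5), (3.0, 3.0)]:
    g_in, g_out = regulate(*start)
    print(f"start {start} -> g_in = {g_in:.2f} , g_out = {g_out:.2f} , calcium = {calcium(g_in, g_out):.2f}")
```

the final conductances differ from run to run , but the functionally relevant combination is the same , a point that returns below in the discussion of protein copy number fluctuations .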
happily , following this path leads to immediate , and successful , predictions for new experiments .when we make models for the dynamics of a biological system , there are many parameters . in some cases these parameters are encoded in the genome , and change only on evolutionary time scales , while in other cases the parameters are subject to control on physiological time scales , as with ion channel copy numbers .the more realistic our models , the more parameters we have , and considerable mathematical ingenuity has been deployed in estimating these parameters from experimental data . but this whole picture is unsettling for a theoretical physicist .our most complete theories of the natural world certainly have parameters , but there is a sense that if we are focused too much on these parameters then we are doing something wrong .if parameters proliferate , we take this as a sign that we are missing some additional level of unification that could relate these many parameters to one another ; if our qualitative explanation of phenomena hinges on precise quantitative adjustment of parameters , then we search for the hidden dynamics that could make this apparent fine tuning happen more naturally .some of the greatest triumphs of modern theoretical physics are nearly free from parameters : the bcs theory of superconductivity , the renormalization group theory of critical phenomena , the theory of the fractional quantum hall effect , and more .importantly , these examples refer not to a rarefied world of interactions among small numbers of elementary particles , but rather to the properties of real , macroscopic materials , with all their chemical complexities .how can we reconcile the parameter aversion of theoretical physicists with the explosion of parameters that arise in a realistic approach to biological systems ?much of what our community is doing , i think , can be understood as a reaction to this problem .there are several approaches .first , it might be that the parameters are just a distraction , and that the meaningful functional behaviors of biological systems emerge as generic or `` robust '' properties of our models , independent of precise parameter settings . a second , approximately opposite view is that the forces of evolution have been strong enough to select non generic parameter values , allowing for phenomena that emerge only through fine tuning ; if we can identify the selection principle , we then have a theory for at least an idealization of the real biological systems , again without reference to parameters . finally , we might hope that parameter independence emerges in biological systems much as it does for inanimate materials , with something like the renormalization group telling us that macroscopic behaviors which matter for the organism can be independent of ( highly parameterized ) microscopic details . in the next three sections ( [ tuning][collective ] ) , i ll look at these three ideas in turn . the problem discussed in the previous section is exactly the problem of balancing robustness against fine tuning ; interestingly , the picture proposed by abbott and colleagues essentially splits the difference between these very different ideas .
within a description of ion channel dynamics alone , what we see in real neurons is manifestly the result of fine tuning : you have to get combinations of ion channel copy numbers right , as shown in fig [ lfa ] .but the mechanism by which cells achieve this tuning is to promote the finely tuned parameters to being dynamical variables , and then this larger dynamical system can be attracted to a functional fixed point , generically .it also is important that the mapping from parameters to function is complex , and many to one . in the conventional language of neuroscience , the parameters of generalized hodgkin huxley models are measured as `` maximal conductances '' for each type of channel , that is , the conductance if all the channels of a particular type are in their open state .this is the product of a single channel conductance and the number of channels , so i have referred to this as a problem of ion channel copy numbers ; since ion channels are proteins , this problem is about how the functional dynamics of a network of interacting proteins depends on the number of copies of each protein .this problem re emerged some years later in thinking about biochemical and genetic networks , where it grew into a separate literature . in biochemical and genetic networks , there has been considerable emphasis on the need for `` robustness '' against protein copy number variations .this idea resonated in the community in part because of a shared , if implicit , hypothesis that precise control over protein copy numbers is not possible .while some systems might indeed be robust , i think we now know that precise control of protein copy number is , in fact , possible when needed .the ion channel example shows that copy number fluctuations can be large , but the functionally important combinations of copy numbers can be tuned through feedback . in contrast , the example of maternal morphogens in the fruit fly embryo shows that cells can generate reproducible copy numbers even without feedback , so that absolute concentrations can carry biologically meaningful signals . in bacterial chemotaxis , which provided some of the motivation for the robustness idea , more recent experiments show that the operon structure of gene regulation in bacteria serves to reduce relative fluctuations in the copy numbers of crucial proteins , and that if this structure is removed then cells that exhibit larger relative fluctuations are at a competitive disadvantage .it thus seems likely that , in all these systems , we are seeing tuning or selection of parameters to achieve functional outputs , and that the search for networks which can achieve functionality with random parameter choices may be missing something essential .from the level of interacting networks of proteins we can drop down to ask about genericity vs. fine tuning in single protein molecules .we recall that proteins are polymers of amino acids , with lengths from a few tens to many hundreds of residues , and with twenty types of amino acids the number of possible proteins is ( beyond ) astronomical .the sequence of amino acids in most cases determines the structure , and hence the function , of the protein . on the one hand , we know that these molecules are not finely tuned : not every single detail of the amino acid sequence matters .
on the other hand , a random sequence typically does nt even fold into a unique compact structure ( random heteropolymers are glassy ) , let alone carry out interesting functions .so where along the continuum from every detail being important to being completely generic do real proteins sit ?we will return to this problem below . instead of moving down to the level of single protein molecules, we can look for examples of this same question by moving `` up '' to the level of neural networks .in particular , let s think about the problem of building a short term memory for a continuous variable .if i want a network that generates the pattern of activity needed to hold my arm fixed at each of several different heights , then i need a dynamical system that has a fixed point at each of these locally stable positions .but if i want to hold stable at a continuous range of positions , i need a line of fixed points , and that s completely non generic .one fixed point is okay , and multiple isolated fixed points are fine , as in the hopfield model , but a whole continuum of fixed points , that s not generic ; you have to tune parameters .a better example than the problem of holding your arm at a fixed height ( which involves feedback from mechanical sensors ) is the problem of holding your eyes still . with your eyes open you have visual feedback , but even if you close your eyes and turn your head you still counter rotate your eyes to compensate for the movement . the signal coming from your ears ( more precisely , from the semicircular canals ) is a motion signal , but in order to keep your eyes counter rotated you need a position signal . so you integrate and hold onto the result after the inputs have disappeared .we can do this for times on the order of a minute , whereas individual neurons usually forget their inputs over perhaps tens of milliseconds .so you have a gap to span , across several orders of magnitude in time .notice that there are two theoretical ideas here .first , we should think about something as ( seemingly ) simple as holding our eyes still in terms of networks with a line of fixed points .second , in the space of possible networks of neurons , such behavior is not generic , so one needs an explanation of how it can occur .a natural answer to the problem of stabilizing non generic behavior is that since this is a brain , it can learn .in fact , the brain has constant access to a feedback signal : if you fail to get things right , then the world keeps slipping on your retina .so the brain should be able to exploit this signal and tune the relevant network , somehow , to achieve this very non generic dynamics .if this picture of tuning via feedback is correct , and we disrupt the feedback , we should be able to `` un tune '' the system . there s a beautiful experiment by david tank and colleagues showing that this is true .the essence of the experiment , as schematized in fig [ dwt1 ] , is to build a planetarium for goldfish , a seemingly low tech experiment that gets right at the central question .this setup monitors the motion of the eyes of the goldfish and rotates the world in proportion , thus changing the coupling between the eyes rotation and the world s rotation .and so if the brain is tuning the networks that stabilize eye movements using visual motion signals , placing the fish in this apparatus will cause the system to mistune .
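what mistuning means is easiest to see in a one dimensional caricature of the integrator . this is my own toy illustration rather than any of the network models in the cited work : a single activity variable with an intrinsic forgetting time of a few tens of milliseconds , fed back on itself with a gain that has to be tuned to one .

```python
import numpy as np

TAU = 0.02   # intrinsic forgetting time of the unit, seconds (tens of milliseconds)

def eye_position(gain, t_max=10.0, dt=0.001):
    """a brief velocity pulse sets the activity; the recurrent gain decides whether it is held."""
    e, trace = 0.0, []
    for step in range(int(t_max / dt)):
        pulse = 1.0 if step * dt < 0.1 else 0.0          # brief input at the start
        e += dt * ((gain - 1.0) * e / TAU + pulse)
        trace.append(e)
    return np.array(trace)

for gain, label in [(1.000, "tuned gain (holds position)"),
                    (0.995, "gain slightly low (leaky)"),
                    (1.005, "gain slightly high (unstable)")]:
    tr = eye_position(gain)
    print(f"{label:30s}: position after the pulse = {tr[100]:5.2f} , after 10 s = {tr[-1]:8.2f}")
```

to hold a position for a minute with a 20 ms intrinsic time , the gain has to sit within a few parts in ten thousand of one , which is the sense in which the behavior is non generic ; the planetarium experiment works by driving this gain away from its tuned value .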
after learning in this unusual environment ,the network wo nt hold a constant eye position , but will be either unstable , with the eye being driven off to eccentric positions , or leaky , with eye positions relaxing back to the middle .this is exactly what is seen experimentally ( fig [ dwt2 ] , ) , and one can even trace these changes in stability down to the dynamics of individual neurons in the network .this shows that you actually have active tuning mechanisms which stabilize this very non generic behavior of the underlying dynamical system .the mechanisms of life can achieve extraordinary precision : our visual system can count single photons , we can hear sounds that cause our eardrum to vibrate by the diameter of an atom , bacteria swim along the gradients of attractive chemicals with a reliability so high that they must be counting every molecule that arrives at their surfaces .i have been interested for some time in whether these are isolated examples , or whether biological systems more generally operate near the limits of what is allowed by the laws of physics .if ( near ) optimality is the rule , then we can promote this to a principle from which essential aspects of the underlying mechanisms can be derived , quantitatively . in some cases the resulting theoretical structureis naturally phrased in terms of optimizing the reliability of decisions , or the accuracy of estimates , while in other cases it seems more compelling to use the slightly more abstract framework of information theory . in either case ,i think it s crucial that we not adopt sweeping hypotheses of optimality for aesthetic reasons , but try to focus on examples where the approach to optimality can be tested , directly , through quantitative experiments . like us , flies use their visual systems to help guide their movements .but , flying at meters per second , they are under pressure to make very quick decisions , and looking out through the tiny lenses of the compound eye , the raw data they have to work with has rather low resolution ; this combination of physical constraints means that even optimal visual estimates of how they are moving through the environment may not be so reliable .estimates of motion , in particular , are encoded by sequences of action potentials from a relatively small number of neurons deep in the fly s brain ; wide field , or rigid body motions are the responsibility of some rather large neurons , and even thirty years ago it was possible to make very long , stable recordings from these cells . at the same time, one can calibrate the signal and noise properties of the photoreceptors , showing that these are nearly ideal photon counters , albeit with finite time resolution , up to counting rates of .rob de ruyter van steveninck and i worked together to show that the motion sensitive neurons encode estimates of visual motion with a precision within a factor of two of the limits set by receptor cell noise and diffraction blur .the observation that the fly can make motion estimates with a precision close to the physical limits suggests that a theory of optimal estimation might be a theory of the computations actually done by the fly s brain .we have developed this theory , and found signatures of the predicted behavior in the responses of the motion sensitive neurons , but it must be admitted that the jury is still out . 
we have used similar arguments to derive the filtering characteristics of the first synapse in the retina , optimizing the detectability of single photon signals ; this may have been the first example of using optimization arguments to generate successful parameter free predictions of neural responses .subsequent work has explored the role of nonlinearities in separating single photon signals from noise at this synapse , and there have been efforts to use optimization arguments to understand aspects of visual motion perception in humans . the case of visual motion estimation in flies is receiving renewed attention , in part because of opportunities to combine genetic and structural tools to dissect the layers of circuitry that lead from the receptor cells to the larger motion sensitive neurons . organisms must respond to changing concentrations of molecules in their environment , and many internal signals are encoded by such concentration changes . as first emphasized by berg and purcell in the context of bacterial chemotaxis , there is a physical limit to the precision of such signaling because the relevant molecules arrive randomly at their targets , creating a form of shot noise .my colleagues and i have tried to make the intuitive arguments of berg and purcell more rigorous , with the goal of defining limits to signaling in a broader range of biological processes , and this problem has now been addressed in several different ways .we have worked with our experimental colleagues to show that the limits are reached , or at least approached , in the early events of embryonic development in the fruit fly , as the network of gap genes responds to spatially varying concentrations of the primary maternal morphogens .a more abstract notion of optimal performance concerns the efficiency of information transmission and representation , an idea that reaches back to discussions of neural coding , perception , and learning by barlow and attneave in the 1950s .the ability of neurons to convey information is limited by the statistical properties of the action potential sequences that they generate , and by the time resolution with which the brain can meaningfully ` read ' these sequences .we have worked with experimental collaborators to show that real neurons transmit information about dynamic sensory inputs at rates within a factor of two of the physical limit set by the entropy of the spike sequences , down to time resolutions on the order of milliseconds , and that this efficiency is even higher for inputs that capture some of the statistical features of the relevant natural signals .as first emphasized by laughlin , such efficiency requires a matching of neural coding strategies to the statistical structure of sensory inputs .laughlin considered the response of large monopolar cells ( lmcs ) in the fly retina to changing image intensity or contrast .these cells give a graded voltage output , and in the limit that voltage noise is small and independent of the mean , the optimal input / output relation is one that generates a uniform distribution of outputs ; this means that the normalized input / output relation is the cumulative distribution of inputs .laughlin built a photodetector with optics matched to that of the fly s eye and sampled this distribution in the natural environment .he then compared the predictions with the voltage responses of the lmcs , as in fig [ matching]a .these results inspired explorations of this `` matching '' principle in different contexts . 
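laughlin s prediction is simple enough to state in a few lines . in the sketch below the `` natural '' contrast distribution is an assumed lognormal stand in , not laughlin s measured histogram : the optimal input / output relation is the cumulative distribution of inputs , and passing the inputs through it makes the outputs uniform , which is what maximizes information when the output noise is small and independent of the input .

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for sampled natural contrasts (laughlin measured these; here an assumed skewed distribution)
contrasts = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# predicted optimal input/output relation: the cumulative distribution of the inputs
sorted_c = np.sort(contrasts)
def optimal_response(c):
    """normalized response in [0, 1]: the fraction of natural inputs below c."""
    return np.searchsorted(sorted_c, c) / len(sorted_c)

outputs = optimal_response(contrasts)
hist, _ = np.histogram(outputs, bins=10, range=(0.0, 1.0))
print("fraction of responses in each of 10 equal output bins :", np.round(hist / len(outputs), 3))
```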
since natural signals are intermittent , optimizing information transmission requires that sensory neurons adjust their input / output relations in real time as the dynamic range of inputs varies .these effects were demonstrated in the vertebrate retina and in the motion sensitive neurons of the fly visual system . in particular , if we consider a family of input distributions that differ only in the overall dynamic range of inputs ( e.g. , gaussian signals with different variances ) , then when the noise is small the only parameter that can set the scale of inputs is the dynamic range itself .hence optimal input / output relations in these different environments should be rescaled versions of one another , as seen in fig [ matching]b ; one can even show that the proportionality constant between the gain of the input / output relation and the dynamic range of input is the one that maximizes information transmission .related adaptation effects have now been seen in a wide range of systems , at levels from the sensory periphery to deep in the cortex , and there are even hints that the speed of adaptation itself approaches the limits set by the need to collect reliable statistics on the input distribution .limits to information transmission are set by a combination of the available dynamic range and the noise levels in the signaling pathway .these limits are especially clear when the signals are carried by changes in the concentration of signaling molecules ; an important example is transcriptional regulation , which we can think of as the transmission of information from the concentration of transcription factors to the expression levels of the target gene(s ) . in the limit that noise levels are small , but state dependent , optimizing information transmission leads to a distribution of outputs inversely proportional to the standard deviation of the noise ; more generally , if we have a characterization of the noise level along the input / output relation , we can find the optimal distribution of outputs numerically . in the _ drosophila _ embryo , the expression level of hunchback responds to the spatially varying concentration of the primary maternal morphogen bicoid , and from the measured noise levels we can compute the optimal distribution of expression levels , which is in surprisingly good agreement with experiment ( fig [ matching]c ) .hunchback is one of several gap genes , and together the expression levels of these genes are thought to provide information about the position of cells ( ) along the anterior posterior axis of the embryo .because the distribution of positions is uniform , matching requires that the errors in estimating position ( ) also be uniform .if we look at just one gene , this is far from the case , but as we add in the contributions from all the gap genes , with their complicated spatial patterns of mean expression and ( co)variance , we see the emergence of a nearly uniform positional error , as shown in fig [ matching]d .importantly , the scale of this positional error ( ) is essentially equal to the precision of subsequent decisions about the body plan .
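the small noise result quoted above is also easy to write down : if the output noise has standard deviation sigma(g) at mean expression level g , the optimal distribution of outputs is proportional to 1/sigma(g) , and the maximum transmitted information is roughly log2 of the normalization integral divided by sqrt(2 pi e) . the noise model in the sketch below is an invented placeholder , not the measured bicoid / hunchback noise .

```python
import numpy as np

# invented placeholder for expression noise as a function of the mean output g in [0, 1]
def sigma(g):
    return 0.05 + 0.15 * np.sqrt(g * (1.0 - g) + 1e-6)

g = np.linspace(0.0, 1.0, 1001)
dg = g[1] - g[0]

# small-noise optimum: distribution of outputs inversely proportional to the local noise level
z = np.sum(dg / sigma(g))
p_opt = (1.0 / sigma(g)) / z

print("optimal output distribution integrates to", round(float(np.sum(p_opt) * dg), 3))
print("density at g = 0.0 vs g = 0.5 :", round(float(p_opt[0]), 2), "vs", round(float(p_opt[500]), 2))
# in the same small-noise limit the capacity is roughly log2( z / sqrt(2*pi*e) )
print("approximate capacity : %.2f bits" % np.log2(z / np.sqrt(2.0 * np.pi * np.e)))
```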
before leaving fig [ matching ] , let me emphasize that in each case we are using the same theoretical principle : maximize information transmission by matching the distribution of inputs to the input / output relation and noise levels .there are differences of detail , but these arise because the noise levels in the different systems are different .crucially , this theoretical approach generates parameter free predictions ; thus , none of the results in fig [ matching ] involve fitting .further , in each case we can not only test predictions based on optimizing information transmission , we can also estimate the amount of information being transmitted and show that it is very close to the optimum ( this point is made in the cited papers ; for the case of the lmcs , see ref and 3.1 of ref ) . beyond matching , we have been searching for the architecture and parameters of genetic networks that optimize information transmission , and we have been able to formulate this optimization problem in a way such that the solutions depend only on the number of available molecules . as a function of this resource constraint , we find transitions from architectures that are highly redundant , with multiple target genes responding identically to transcription factor inputs , to architectures where the multiple targets are activated or repressed at staggered thresholds , tiling the dynamic range of inputs .redundancy can be reduced , and efficiency increased , by mutual repression among target genes ; feedback loops can generate long integration times to help average out noise , and in spatially extended systems such as a developing embryo the proper amount of spatial averaging can play a similar noise reducing role ; finally , the cell can enhance information transmission at the lowest transcription factor concentrations by having these molecules act also as translational regulators of constitutively expressed mrnas .all of these theoretical results have qualitative correlates in the properties of real genetic control networks , notably the gap gene network in the developing fly embryo , although it remains a challenge to put these different results together into a fully quantitative theory of real networks .in all the examples above , successful application of information theoretic ideas depends on identifying what information is relevant to the system we are studying .one can imagine a nightmare scenario in which the very principled notion of optimizing information transmission is submerged under long arguments about natural history , and our hopes for theory in the physics sense are dashed .can we do something more general ? we have argued that , in many cases , information is relevant to the extent that it has predictive power ; predictive information captures our intuition about the complexity or richness of time series , and the efficient representation of predictive information unifies the description of signal processing and learning .
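for the simplest stationary processes the predictive information can be written in closed form , which gives a feeling for what is being measured . the example below is a gaussian markov ( ar(1) ) process of my own choosing , not any of the data sets discussed here : because the past and the future are conditionally independent given the present sample , the predictive information reduces to the mutual information between successive samples .

```python
import numpy as np

rho = 0.9   # assumed lag-one correlation of the gaussian markov process

# closed form for this toy process: I_pred = -(1/2) * log2(1 - rho^2) bits
print("predictive information : %.2f bits" % (-0.5 * np.log2(1.0 - rho**2)))

# sanity check from a simulated trace
rng = np.random.default_rng(1)
x = np.zeros(200_000)
for t in range(1, len(x)):
    x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
r = np.corrcoef(x[:-1], x[1:])[0, 1]
print("estimate from the simulated lag-one correlation : %.2f bits" % (-0.5 * np.log2(1.0 - r**2)))
```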
in collaboration with mj berry ii and his colleagues , we have now measured the predictive information carried by neurons in the vertebrate retina .every ganglion cell participates in a small group for which the encoded predictive information is close to the limit set by the statistical structure of the inputs themselves .groups of cells carry information about the future state of their own activity , and this information can be extracted by downstream neurons that exhibit familiar forms of visual feature selectivity .the efficient representation of predictive information is a new candidate principle that can be applied at every stage of neural computation .from the spectacular aerial displays of flocking birds down to the beautiful choreography of cell movements in a developing embryo , many of life s most striking phenomena emerge from interactions among hundreds if not thousands or even millions of components .the enormous success of statistical physics in describing emergent phenomena in equilibrium systems has led many people to hope that it could provide a useful language for describing emergence in biological systems as well . in the past decade or so , my colleagues and i have been excited by the use of maximum entropy methods to build statistical physics models for a variety of biological systems that are grounded in real data . in a small window of time , a single neuron either generates an action potential or remains silent , and thus the states of a network of neurons are described naturally by binary vectors .we have tried to approximate the probability distribution of these binary vectors by maximum entropy distributions that are consistent with the mean spike probability for each cell , and with the matrix of pairwise correlations among cells .these models are ising models , and since correlations have both signs , the interactions among `` spins '' in the model have both signs ; these models are a sort of spin glass , not unlike the model that hopfield wrote down in 1982 . with mj berry ii and his colleagues , who have developed methods for recording simultaneously from almost all the neurons in a small patch of the vertebrate retina as it responds to naturalistic visual inputs , we found that models based on pairwise correlations provided strikingly precise descriptions of the entire distribution of neural activity in groups of ten to fifteen cells .by now we can write very accurate probability distributions for the joint activity of 160 cells in the vertebrate retina .although there are many details , the overall structure of these models is consistent with extrapolations from the analysis of smaller groups of cells , and aspects of this structure can be seen in much simpler models .we have preliminary evidence that the same maximum entropy strategy can describe activity in populations of neurons in the hippocampus . around the time we were getting our first results on maximum entropy models for neurons , i heard rama ranganathan talk about his group s efforts to explore the space of amino acid sequences.
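the construction just described fits in a few dozen lines for a handful of cells . the sketch below uses invented , independent `` spike trains '' as the target data and brute force enumeration of all states , so it is a toy for seeing the logic of the pairwise maximum entropy ( ising ) model rather than the machinery used in the papers cited above .

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 5                                    # few enough "neurons" to enumerate all 2^N activity patterns
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

# toy target data: in the real analyses these moments come from recorded spike trains
data = (rng.random((20_000, N)) < 0.2).astype(float)
mean_data = data.mean(axis=0)
corr_data = (data.T @ data) / len(data)

h = np.zeros(N)          # "fields" fixing the mean spike probabilities
J = np.zeros((N, N))     # "couplings" fixing the pairwise correlations

def model_moments(h, J):
    """exact expectations under P(s) ~ exp(h.s + s.J.s/2), summing over all states."""
    log_w = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(log_w - log_w.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

# gradient ascent on the likelihood: nudge fields and couplings until the model
# reproduces the measured first and second moments
for _ in range(3000):
    mean_m, corr_m = model_moments(h, J)
    h += 0.5 * (mean_data - mean_m)
    dJ = 0.5 * (corr_data - corr_m)
    np.fill_diagonal(dJ, 0.0)            # diagonal terms are redundant with the fields
    J += dJ

mean_m, corr_m = model_moments(h, J)
print("largest mismatch in the pairwise correlations :", float(np.abs(corr_data - corr_m).max()))
```

with that sketch of the construction in hand , let me return to ranganathan and his colleagues , and to the protein sequence problem .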
in outline , they looked at a family of proteins that were known to have similar structures and functions , and developed an algorithm to generate a new ensemble of sequences that were consistent with the observed pairwise correlations among amino acid substitutions at different sites along the chain .they then synthesized some of the molecules in this artificial family , and found that a substantial fraction of these molecules were functional ; in contrast , proteins synthesized by choosing amino acids independently at each site were not functional .we were able to show that what ranganathan and colleagues were doing was , in a certain limit , equivalent to the pairwise maximum entropy construction that we were doing for neurons . in the maximum entropy construction ,correlations between substitutions at different sites are generated by effective interactions , and from other statistical mechanics problems we expect that the spatial range of correlations will be larger than the spatial extent of interactions .indeed , one can find correlations among amino acid substitutions that are widely separated , not only along the polymer chain but also in three dimensional space , but our intuition is that interactions should be local .if this is borne out , then the statistics of pairwise correlations among amino acids substitutions encodes information about which sites along the one dimensional sequence are neighbors in three dimensional space , and we would be able to predict protein structures from sequence data alone . there is tantalizing evidence from weigt , colwell , and others that this actually works .these models also provide an explicit answer to the question raised in [ tuning ] about the location of amino acid sequences along the continuum from fine tuning to randomness .perhaps the prototypical example of emergent , collective behavior in a biological system is a flock of birds .there were important early theoretical efforts to develop a statistical mechanics of flocking and swarming , and these ideas developed into a whole field of `` active matter '' , but i think it is fair to say that , well past the year 2000 , most of the experimental observations were qualitative .the situation changed dramatically with the work of cavagna , giardina , and their colleagues in rome , who developed methods to track the trajectories of every bird in groups of more than one thousand starlings as they engaged in aerial displays .we have worked together to build maximum entropy models for the joint distribution of velocities for all the birds in the flock , matching the average correlation of birds with their near neighbors , as well as mean and variance of the speeds .again , these extremely simple models are strikingly accurate , as shown in fig [ birds ] , correctly predicting the pattern of correlations throughout the entire flock , including the small but significant four bird correlations , as well as the long ranged correlations in the fluctuations of both flight direction and flight speed . again ,fig [ birds ] is not a collection of fits ; the model is determined by matching three local expectation values , one of which simply sets the units of speed , and everything else that we calculate is a parameter free prediction . 
in particular , we are not free to make adjustments in an attempt to capture the long ranged correlations ; either these are predicted correctly , or they are not .these models are mathematically equivalent to equilibrium statistical mechanics models with local interactions , and in such systems long ranged correlations can arise only by two mechanisms : goldstone s theorem , and tuning to a critical point . indeed , the flock spontaneously breaks a continuous symmetry by choosing an overall flight direction , and the long ranged correlations in the directional fluctuations are mediated by the resulting goldstone modes .but there is no corresponding argument for the speed fluctuations , and in this case long ranged correlations must be a signature of criticality , as one can verify by detailed analysis of the model in ref .the rome group has gone on to analyze the trajectories of swarming midges , and here too they see long ranged correlations of velocity fluctuations , now in the absence of symmetry breaking , and argue that this again is a sign of criticality . for neurons , the notion of locality of interactions is not so useful , because neurons are extended objects and can reach many , many neighbors . as a result , long ranged correlations are not a useful diagnostic of criticality . as an alternative we have tried to develop a thermodynamics for neural networks , essentially counting the number of states ( combinations of spiking and silence across the population ) that have a particular value of log probability ; this is equivalent to measuring entropy vs energy .strikingly , for the activity of neurons in the retina , the entropy is essentially a linear function of the energy , with unit slope , which corresponds to an unusual kind of critical point .there is an independent literature that tries to connect the dynamical patterns of activity in neural systems with the scale invariant `` avalanches '' predicted by self organized criticality .another dynamical notion of criticality is to ask about the number of lyapunov exponents near zero , and there is an elegantly simple model that shows how a network could learn to be critical in this sense .subsequent work from magnasco and colleagues has looked at the data emerging from human electro corticography ; they estimate the spectra of lyapunov exponents for models that describe these dynamical signals , show that there is a concentration of exponents near zero , and even that this critical behavior is lost as the patient slips out of consciousness under anesthesia .the relationship between statistical and dynamical notions of criticality is not at all clear , and this is a physics problem not a biology problem ; for a first try at connecting the different ideas in the context of neural data , see ref .returning to the families of proteins , we again see hints of critical behavior .the hope is that the distribution of sequences can be described by models in which the different choices of amino acid interact only when the residues are in contact , but we also know that measured correlations extend over long distances , which is why the attempt to infer contacts from correlations is hard .if this picture really is correct , we have the coexistence of local interactions and long ranged correlations , which is a signature of criticality .but the situation is far from clear , since the data are still sparse , and correlations derived from functionality are mixed with correlations derived from shared evolutionary history .we have tried a 
test case , the diversity of antibodies in the zebrafish immune system , which involves much shorter sequences , where the relevant protein family can be exhaustively sampled , and hence where the maximum entropy construction can be carried , convincingly , to completion . even in this more limited problem , we see signs that the distribution of sequences is poised near a critical point in parameter space .i hope to have convinced you that our modern understanding of the phenomena of life has already been influenced , dramatically , by theory , and that the prospects for the future are bright .this is , perhaps , a moment to emphasize that the examples i have chosen are far from exhaustive . in the same spirit , i could have discussed many other beautiful developments : the idea that reliable transmission of information through the synthesis of new molecules , as in the replication , transcription , and translation of sequence information coded in dna , depends on building maxwell demons ( kinetic proofreading ) that can push past the limits to precision set by thermodynamics ; the idea that amino acid sequences of real proteins are selected to avoid the frustration that leads to the glassiness of random heteropolymers ; the idea that the pace of evolutionary change is determined not by the typical organism , but by those rare organisms in the tail of the fitness distribution , as well as broader connections of evolutionary dynamics to statistical physics ; the idea that the active mechanics of the inner ear are tuned near a critical point ( hopf bifurcation ) , maximizing sensitivity and frequency selectivity while providing a natural and nearly parameter free explanation for the essential nonlinearities of auditory perception ; and more . despite these many examples , there is a persistent notion that biology has developed without significant theoretical input .this is reinforced by what amounts to revisionist history in the teaching of biology .if biology is presented to undergraduate students as the science they can do even if they do nt like math , then when it comes time to teach them about the foundations of molecular and cellular neuroscience , one simply can not write down the hodgkin huxley equations and expect the students to understand what is going on . similarly , now that we can sequence dna , it is conventional to suppress the fact that the linear arrangement of genes along chromosomes was established by mathematical analysis , long before we even knew the identity of dna as the molecule that carries genetic information .even when it comes to experimental methods , few modern biology curricula teach the theory of x ray diffraction from a helix , and thus students do not learn the mathematics behind the interpretation of rosalind franklin s famous observations on dna .the message , i think , is that mathematical analysis , not to speak of theory , is merely technical .even with the proliferation of graduate programs in quantitative biology , so long as this anti mathematical approach constitutes the mainstream of biology teaching , we can not expect that the biology community itself will create a genuinely receptive audience for theory .if the community insists that what is `` biologically relevant '' must always be translated into words , then the search for mathematical description can never be central to the practice of biology .
in a dissent from cheerful interdisciplinarity , i believe it is essential that the physics community provide a home for the theoretical physics of biological systems . discussions of the relation between physics and biology , and especially of the relation between theoretical physics and biology , often include various warnings about theorists isolating themselves from experiment , running off to do things which are irrelevant .i believe that these concerns are wildly overstated .my colleagues and i , who are trying to do theory at the interface of physics and biology , spend quite a lot of our time interacting with experiments , and with experimentalists . indeed ,one of the traditional roles of theory in physics is to highlight things that would be interesting to measure , and this happens as we try to theorize about biological systems as well .although it often is claimed that biology is awash in data , in fact the attempt to build theories often points to numbers that we do nt know , numbers that can determine which of several theoretical directions is most productive . sometimes measuring these quantities that are most relevant for theory drives the development of new experimental methods , or new data analysis strategies , and these have implications well beyond the original theoretical ideas . ( an example is the idea of decoding neural activity : the theoretical work involved both understanding the physical limits to reliability ( see above ) and developing conditions under which decoding could be simple even when encoding was complicated . while i still think the results on the precision of computation are very important , the idea of decoding itself had a much larger impact , and even had implications for practical matters such as neural prostheses . ) this means , in particular , that theories can be enormously productive even if they are wrong , or not faithful to all the details of the real systems we are thinking about .if you are worried about a disconnect between theory and experiment , i think that there is a much greater danger of people doing experiments and collecting data that will never fit into any mathematical framework .this seems especially likely at a moment when you can collect exponentially more data than you could before .i would remind you that in other data intensive , phenomenological areas , astrophysics and cosmology for example , when you go off to spend $100 million to collect data , there are theorists on the team for the design of the instruments and observations .you think about what you re looking for and what framework you re planning on analyzing it with _ before _ you collect the data , not after .the attentiveness of theorists to experiment also raises the worry that we will lose sight of our more grand ambitions .it certainly is true that we live in an era where data is expanding exponentially , and this is a good thing .
and we as theorists are the richer for it .but theory is more than data mining .the point here is that miners know gold when they see it .what you do when you are data mining is to look for certain kinds of structure ; within the set of possible structures you identify the one which is best supported by the data , and then pin down the parameters within this best structure .but the possible structures are , in a very real sense , your theories about what might be going on .if your list of structures is not rich enough and deep enough , if your list of possible theories does nt include the right one , you re not going to understand what s going on , and no amount of data is going to solve this problem .finally , i believe that the deepest theoretical questions transcend the boundaries between the subfields of biology .i hope that this is clear from the examples that i have given .i am excited to see the same theoretical questions being formulated in different biological contexts , in some cases really using the same mathematics to describe these very different systems .one of the ways in which this has happened is by focusing on problems that the organism itself has to solve , from digging weak signals out of a noisy background to setting the parameters of its own networks .even if the answers are different , it is attractive to think of mechanisms in different systems , even at different levels of organization , as being chosen by nature to solve the same physics problems that the organism faces in different contexts .similarly , in the discussion of collective behavior , we have seen the same conceptual principles organizing our thinking about problems ranging from the evolution of protein families to the dynamics of flocks and swarms , not just at an abstract level but also engaging with details of the data .importantly , we see all these commonalities only through theory , and thus theory has the chance of redrawing the intellectual landscape of the field .in looking more carefully through the references , i realized that the spirit of what i want to convey here was expressed long ago , albeit in a different context : `` what are one s overall impressions of the present state of the subject ?two things strike me particularly .first , the existence of general ideas covering wide aspects of the problem .it is remarkable that one can formulate principles ... which explain many striking facts and yet for which proof is completely lacking .this gap between theory and experiment is a great stimulus to the imagination .second , the extremely active state of the subject experimentally ... new and significant results are being reported every few months , and there seems to be no sign of work coming to a standstill because experimental techniques are inadequate . ''what crick was saying about the interplay between theory and experiment in the exploration of the genetic code , now nearly sixty years ago , is something that applies today to our exploration of life much more broadly . 
thanks to the simons foundation , and to many colleagues involved in the 2014 workshop , for the opportunity to sharpen the ideas expressed here .thanks also to lee morgan for transcribing the lecture .my own work on these problems has been in collaboration with many others , as can be seen from the reference list , who have made these explorations a pleasure .we have been supported in part by the national science foundation , most recently through grants phy1305525 , phy1451171 , and ccf0939370 , by the simons foundation , and by the swartz foundation .special thanks to cg callan and mo magnasco , for many long conversations about what it is we all are trying to do . 99 w bialek , _ biophysics : searching for principles _ ( princeton university press , princeton , 2012 ). lord rayleigh , xii . on our perception of sound direction ._ phil mag series 6 _ * 13 , * 214232 ( 1907 ). jd watson and fhc crick , a structure for deoxyribose nucleic acid . _nature _ * 171 , * 737739 ( 1953 ) .jd watson and fhc crick , genetical implications of the structure of deoxyribonucleic acid ._ nature _ * 171 , * 964967 ( 1953 ) .fhc crick , on protein synthesis ._ symp soc exp biol _* 12 , * 138163 ( 1958 ) .hf judson , _ the eighth day of creation_. ( simon and schuster , new york , 1979 ) .jd watson , _ the double helix : a personal account of the discovery of the structure of dna_. norton critical edition , g stent , ed ( norton , new york , 1980 ) .b maddox , _ rosalind franklin : the dark lady of dna_. ( harper collins , 2002 ) .re franklin and rg gosling , molecular configuration in sodium thymonucleate ._ nature _ * 171 , * 740741 ( 1953 ) .mhf wilkins , ar stokes , and hr wilson , molecular structure of deoxypentose nucleic acids ._ nature _ * 171 , * 738740 ( 1953 ) .re franklin and rg gosling , evidence for 2chain helix in crystalline structure of sodium deoxyribonucleate ._ nature _ * 172 , * 156157 ( 1953 ) .re franklin and rg gosling , the structure of sodium thymonucleate fibres .i. the influence of water content ._ acta cryst _ * 6 , * 673677 ( 1953 ) .re franklin and rg gosling , the structure of sodium thymonucleate fibres .ii . the cylindrically symmetrical patterson function ._ acta cryst _ * 6 , * 678685 ( 1953 ) .r langridge , hr wilson , cw hooper , mhf wilkins , and ld hamilton , the molecular configuration of deoxyribonucleic acid . i. x ray diffraction study of a crystalline form of the lithium salt ._ j mol biol _ * 2 , * 1937 ( 1960 ) .r langridge , da marvin , we seeds , hr wilson , cw hooper , and mhf wilkins , the molecular configuration of deoxyribonucleic acid .ii . molecular models and their fourier transforms . _ j mol biol _ * 2 , * 3864 ( 1960 ) .w cochran , fhc crick , and v vand , the structure of synthetic polypeptides .i. the transform of atoms on a helix ._ acta cryst _ * 5 , * 581586 ( 1952 ) .pw anderson , more is different . _science _ * 177 , * 393396 ( 1972 ) .al hodgkin and af huxley , a quantitative description of membrane current and its application to conduction and excitation in nerve ._ j physiol ( lond ) _ * 117 , * 500544 ( 1952 ) .f rieke , d warland , r de ruyter van steveninck , and w bialek , _ spikes : exploring the neural code . _( mit press , cambridge , 1997 ) .g lemasson , e marder , and lf abbott , activity dependent regulation of conductances in model neurons . _ science _ * 259 , * 19151917 ( 1993 ) .lf abbott and g lemasson , analysis of neuron modelswith dynamically regulated conductances ._ neural comp _ * 5 , * 823842 ( 1993 ) . 
ms goldman , j golowasch , e marder , and lf abbott , global structure , robustness , and modulation of neuronal models ._ j neurosci _ * 21 , * 52295238 ( 2001 ) . z liu , j golowasch , e marder , and lf abbott , a model neuron with activity dependent conductances regulated by multiple calcium sensors ._ j neurosci _ * 18 , * 23092320 ( 1998 ). aa prinz , d bucher , and e marder , similar network activity from disparate circuit parameters ._ nat neurosci _ * 7 , * 13451352 ( 2004 ) .g turrigiano , lf abbott , and e marder , activity dependent changes in the intrinsic properties of cultured neurons ._ science _ * 264 , * 974977 ( 1994 ) .rn cahn , the eighteen parameters of the standard model in your everyday life ._ rev mod phys _ * 68 , * 952959 ( 1996 ) .j bardeen , ln cooper , and jr schrieffer , theory of superconductivity . _phys rev _ * 108 , * 11751204 ( 1957 ) .kg wilson , the renormalization group : critical phenomena and the kondo problem ._ rev mod phys _ * 47 , * 773840 ( 1975 ) .rb laughlin , anomalous quantum hall effect : an incompressible quantum fluid with fractionally charged excitations ._ phys rev lett _ * 50 , * 13951398 ( 1983 ) .dj gross and f wilczek , asymptotically free gauge theories .phys rev d _ * 9 , * 980993 ( 1974 ) .f wilczek , problems of strong p and t invariance in the presence of instantons ._ phys rev lett _ * 40 , * 279282 ( 1977 ) .n barkai and s leibler , robustness in simple biochemical networks . _ nature _ * 387 , * 913917 ( 1997 ) .g von dassow , e meir , em munro , and gm odell , the segment polarity network is a robust developmental module ._ nature _ * 406 , * 188192 ( 2000 ) .t gregor , dw tank , ef wieschaus , and w bialek , probing the limits to positional information ._ cell _ * 130 , * 153164 ( 2007 ) .md petkova , sc little , f liu , and t gregor , maternal origins of developmental reproducibility ._ curr biol _ * 24 , * 12831288 ( 2014 ) .m kollmann , l lvdok , k bartholome , j timmer , and v sourjik design principles of a bacterial signalling network ._ nature _ * 438 , * 504507 ( 2005 ) .l lvdok , k bentele , n vladimirov , a mller , fs pop , d lebiedz , m kollmann , and v sourjik , role of translational coupling in robustness of bacterial chemotaxis pathway . _ plos biology _ * 7 , * e1000171 ( 2009 ) .jj hopfield , neural networks and physical systems with emergent collective computational abilities . _ proc natl acad sci usa _ * 79 , * 25542558 ( 1982 ) .hs seung , how the brain keeps the eyes still ._ proc natl acad sci ( usa ) _ * 93 , * 1333913334 ( 1996 ) .g major , r baker , e aksay , b mensh , hs seung , and dw tank , plasticity and tuning by visual feedback of the stability of a neural integrator ._ proc natl acad sci ( usa ) _ * 101 , * 77397744 ( 2004 ) .g major , r baker , e aksay , hs seung , and dw tank , plasticity and tuning of the time course of analog persistent firing in a neural integrator ._ proc natl acad sci ( usa ) _ * 101 , * 77457750 ( 2004 ) . rr de ruyter van steveninck , wh zaagman , and hak mastebroek , adaptation of transient responses of a movement sensitive neuron in the visual system of the blowfly_ calliphora erythrocephala_. _ biol cybern _ * 54 , * 223236 ( 1986 ) .rr de ruyter van steveninck and sb laughlin , light adaptation and reliability in blowfly photoreceptors .r de ruyter van steveninck and sb laughlin , _ int j neural syst _ * 7 , * 437444 ( 1996 ) . 
rr de ruyter van steveninck and sb laughlin , the rate of information transfer at graded potential synapses ._ nature _ * 379 , * 642645 ( 1996 ) .w bialek , f rieke , rr de ruyter van steveninck , and d warland , reading a neural code ._ science _ * 252 , * 18541857 ( 1991 ) .r de ruyter van steveninck and w bialek , reliability and statistical efficiency of a blowfly movement sensitive neuron . _phil trans r. soc lond ._ * 348 , * 321340 ( 1995 ) .hc berg and em purcell , physics of chemoreception ._ biophys j _ * 20 , * 193219 ( 1977 ) .w bialek and s setayeshgar , physical limits to biochemical signaling ._ proc natl acad sci ( usa ) _ * 102 , * 1004010045 ( 2005 ) .w bialek and s setayeshgar , cooperativity , sensitivity and noise in biochemical signaling ._ phys rev lett _ * 100 , * 258101 ( 2008 ) .g tkaik and w bialek , diffusion , dimensionality and noise in transcriptional regulation ._ phys rev e _ * 79 , * 051901 ( 2009 ) .rg endres and ns wingreen , maximum likelihood and the single receptor ._ phys rev lett _ * 103 , * 158101 ( 2009 ) .t mora and ns wingreen , limits of sensing temporal concentration changes by single cells ._ phys rev lett _ * 104 , * 248101 ( 2010 ) .cc govern and pr ten wolde , fundamental limits on sensing chemical concentrations with linear biochemical networks ._ phys rev lett _ * 109 , * 218103 ( 2012 ) .k kaizu , wh de ronde , j paijmans , k takahashi , f tostevin , and pr ten wolde , the berg purcell limit revisited ._ biophys j _ * 106 , * 976985 ( 2014 ) .j paijmans and pr ten wolde , lower bound on the precision of transcriptional regulation and why facilitated diffusion can reduce noise in gene expression ._ phys rev e _ * 90,*032708 ( 2014 ) .g tkaik , t gregor , and w bialek , the role of input noise in transcriptional regulation ._ plos one _ * 3 , * e2774 ( 2008 ) .m potters and w bialek , statistical mechanics and visual signal processing ._ j phys i france _ * 4 * , 17551775 ( 1994 ) . rr de ruyter van steveninck , w bialek , m potters , and rh carlson , statistical adaptation and optimal estimation in movement computation by the blowfly visual system . inproc ieee conf sys man cybern _, 302307 ( 1994 ) .w bialek and r de ruyter van steveninck , features and dimensions : motion estimation in fly vision .arxiv : q bio/0505003 ( 2005 ) .w bialek and wg owen , temporal filtering in retinal bipolar cells : elements of an optimal computation? _ biophys j _ * 58 , * 12271233 ( 1990 ) .f rieke , wg owen , and w bialek , optimal filtering in the salamander retina . 
in _ advances in neural information processing 3 , _r lippman , j moody & d touretzky , eds , pp 377383 ( morgan kaufmann , san mateo ca , 1991 ) .gd field and f rieke , nonlinear signal transfer from mouse rods to bipolar cells and implications for visual sensitivity ._ neuron _ * 34 , * 773785 ( 2002 ) .y weiss , ep simoncelli , and eh adelson , motion illusions as optimal percepts ._ nat neurosci _ * 5 , * 598604 ( 2002 ) .aa stocker and ep simoncelli , noise characteristics and prior expectations in human visual speed perception ._ nat neurosci _ * 9 , * 578585 ( 2006 ) .je fitzgerald , ay katsov , tr clandinin , and mj schnitzer , symmetries in stimulus statistics shape the form of visual motion estimators ._ proc natl acad sci ( usa ) _ * 108 , * 1290912914 ( 2011 ) .da clark , je fitzgerald , jm ales , dm gohl , ma silies , am norcia , and tr clandinin , flies and humans share a motion estimation strategy that exploits natural scene statistics ._ nat neurosci _ * 17 , * 296303 ( 2014 ) .je fitzgerald and da clark , nonlinear circuits for naturalistic visual motion estimation ._ elife _ * 4 , * e09123 ( 2015 ) .s takemura , et al , a visual motion detection circuit suggested by _drosophila _ connectomics ._ nature _ * 500 , * 175181 ( 2013 ) .ye fisher , jcs leong , k sporar , md ketkar , dm gohl , tr clandinin , and m silies , a class of visual neurons with wide field properties is required for local motion detection ._ curr biol _ * 25 , * 31783189 ( 2015 ) .f attneave , some informational aspects of visual perception ._ psych rev _ * 61 , * 183193 ( 1954 ) .hb barlow , sensory mechanisms , the reduction of redundancy , and intelligence . in _ proceedings of the symposium on the mechanization of thought processes , volume 2 _ , dv blake andam utlley , eds , pp 537574 ( hm stationery office , london , 1959 ) .hb barlow , possible principles underlying the transformation of sensory messages . in _ sensory communication _ , w rosenblith , ed , pp 217234 ( mit press , cambridge , 1961 ) .d mackay and ws mcculloch , the limiting information capacity of a neuronal link ._ bull math biophys _ * 14 , * 127135 ( 1952 ). f rieke , d warland , and w bialek , coding efficiency and information rates in sensory neurons .. lett _ * 22 , * 151156 ( 1993 ) .sp strong , r koberle , rr de ruyter van steveninck , and w bialek , entropy and information in neural spike trains ._ phys rev lett _ * 80 , * 197200 ( 1998 ) .sp strong , rr de ruyter van steveninck , w bialek , and r koberle , on the application of information theory to neural spike trains . in _pacific symposium on biocomputing 98 _ , rb altman , ak dunker , l hunter , and te klein , eds , pp 621632 ( world scientific , singapore , 1998 ) .f rieke , da bodnar , and w bialek , naturalistic stimuli increase the rate and efficiency of information transmission by primary auditory neurons . _ proc r soc lond ser . b _ * 262 , * 259265 ( 1995 ) .gd lewen , w bialek , and rr de ruyter van steveninck , neural coding of naturalistic motion stimuli ._ network _ * 12 , * 317329 ( 2001 ). bd wright , k sen , w bialek , and aj doupe , spike timing and the coding of naturalistic sounds in a central auditory area of songbirds . 
in _ advances in neural information processing 14 , _ tg dietterich , s becker , and z ghahramani , eds , pp 309316 ( mit press , cambridge , 2002 ) .gd lewen , w bialek , and rr de ruyter van steveninck , neural coding of a natural stimulus ensemble : information at sub millisecond resolution .i nemenman , _plos comput biol _ * 4 , * e1000025 ( 2008 ) .sb laughlin , a simple coding procedure enhances a neuron s information capacity ._ z naturforsch _* 36c , * 910912 ( 1981 ) .dl ruderman and w bialek , statistics of natural images : scaling in the woods ._ phys rev lett _ * 73 * , 814817 ( 1994 ) .s smirnakis , mj berry ii , dk warland , w bialek , and m meister , adaptation of retinal processing to image contrast and spatial scale ._ nature _ * 386 , * 6973 ( 1997 ) .n brenner , w bialek , and r de ruyter van steveninck , adaptive rescaling optimizes information transmission ._ neuron _ * 26 , * 695702 ( 2000 ) . mn kvale and ce schreiner , short - term adaptation of auditory receptive fields to dynamic stimuli ._ j neurophysiol _ * 91 , * 604612 ( 2004 ) .i dean , ns harper , and d mcalpine , neural population coding of sound level adapts to stimulus statistics ._ nature neurosci _ * 8 , * 16841689 ( 2005 ) .ki nagel and aj doupe , temporal processing and adaptation in the songbird auditory forebrain ._ neuron _ * 21 , * 845859 ( 2006 ) .m maravall , rs petersen , al fairhall , e arabzadeh , and me diamond , shifts in coding properties and maintenance of information transmission during adaptation in barrel cortex . _ plos biology _ * 5 , * e19 ( 2007 ) .w de baene , e premereur , and r vogels , properties of shape tuning of macaque inferior temporal neurons examined using rapid serial visual presentation ._ j neurophysiol _ * 97 , * 29002916 ( 2007 ) .b wen , gi wang , i dean , and b delgutte , dynamic range adaptation to sound level statistics in the auditory nerve . _ j neurosci _ * 29 , * 1379713808 ( 2009 ) .jc rahmen , p keating , fr nodal , al schulz , and aj king , adaptation to stimulus statistics in the perception and neural representation of auditory space ._ neuron _ * 66 , * 937948 ( 2010 ) .nc rabinowitz , bdb willmore , jwh schnup , and aj king , contrast gain control in auditory cortex ._ neuron _ * 70 , * 11781192 ( 2011 ) .al fairhall , gd lewen , w bialek , and rr de ruyter van steveninck , efficiency and ambiguity in an adaptive neural code ._ nature _ * 412 , * 787792 ( 2001 ) .b wark , a fairhall , and f rieke , timescales of inference in visual adaptation ._ neuron _ * 61 , * 750761 ( 2009 ) .g tkaik , cg callan jr , and w bialek , information capacity of genetic regulatory elements ._ phys rev e _ * 78 , * 011910 ( 2008 ) .g tkaik , cg callan jr , and w bialek , information flow and optimization in transcriptional regulation ._ proc natl acad sci ( usa ) _ * 105 , * 1226512270 ( 2008 ) .jo dubuis , g tkaik , ef wieschaus , t gregor , and w bialek , positional information , in bits ._ proc natl acad sci ( usa ) _ * 110 , * 1630116308 ( 2013 ) .g tkaik , am walczak , and w bialek , optimizing information flow in small genetic networks ._ phys rev e _ * 80 , * 031920 ( 2009 ) .am walczak , g tkaik , and w bialek , optimizing information flow in small genetic networks .ii : feed forward interaction . _ phys rev e _ * 81 , * 041905 ( 2010 ) .g tkaik , am walczak , and w bialek , optimizing information flow in small genetic networks .iii . a self interacting gene . 
_ phys rev e _ * 85 , * 041903 ( 2012 ) .tr sokolowski and g tkaik , optimizing information flow in small genetic networks .iv . spatial coupling. _ phys rev e _ * 91 , * 062710 ( 2015 ) .tr sokolowski , am walczak , w bialek , and g tkaik , extending the dynamic range of transcription factor action by translational regulation .arxiv.org:1507.02562 [ qbio.mn ] ( 2015 ) .w bialek , i nemenman , and n tishby , predictability , complexity and learning ._ neural comp _ * 13 , * 24092463 ( 2001 ). w bialek , rr de ruyter van steveninck , and n tishby , efficient representation as a design principle for neural coding and computation .arxiv:0712.4381 [ qbio.nc ] ( 2007 ) .a preliminary account appears in the _ proceedings of the international symposium on information theory 2006_. se palmer , o marre , mj berry ii , and w bialek , predictive information in a sensory population ._ proc natl acad sci ( usa ) _ * 112 , * 69086913 ( 2015 ) .r segev , j goodhouse , j puchalla , and mj berry ii , recording spikes from a large fraction of the ganglion cells in a retinal patch ._ nature neurosci _ * 7 , * 11551162 ( 2004 ) .o marre , d amodei , k sadeghi , f soo , te holy , and mj berry ii , recording from a complete population in the retina ._ j neurosci _ * 32 , * 1485914873 ( 2012 ) .e schneidman , mj berry ii , r segev , and w bialek , weak pairwise correlations imply strongly correlated network states in a neural population ._ nature _ * 440 , * 10071012 ( 2006 ) .g tkaik , o marre , d amodei , e schneidman , w bialek , and mj berry ii , searching for collective behavior in a large network of sensory neurons ._ plos comput biol _ * 10 , * e1003408 ( 2014 ) .g tkaik , e schneidman , mj berry ii , and w bialek , ising models for networks of real neurons .arxiv : q bio.nc/0611072 ( 2006 ) .g tkaik , e schneidman , mj berry ii , and w bialek , spin glass models for networks of real neurons .arxiv:0912.5409 [ qbio.nc ] ( 2009 ) .g tkaik , o marre , d amodei , mj berry ii , and w bialek , the simplest maximum entropy model for collective behavior in a neural network . _ j stat mech _ p03011 ( 2013 ) .l meshulam , j gauthier , dw tank , and w bialek , interpreting collective neural activity underlying spatial navigation in virtual reality . _ bulletin of the aps _ * 60(1 ) , * g50.9 ( 2015 ) .m socolich , sw lockless , wp russ , h lee , kh gardner , and r ranganathan , evolutionary information for specifying a protein fold ._ nature _ * 437 , * 512518 ( 2005 ) .wp russ , dm lowery , p mishra , mb yaffe , and r ranganathan , natural like function in artificial ww domains. _ nature _ * 437 , * 579583 ( 2005 ) .w bialek and r ranganathan , rediscovering the power of pairwise interactions .arxiv.org:0712.4397 [ qbio.qm ] ( 2007 ) . as lapedes , bg giraud , lc liu , and gd stormo , a maximum entropy formalism for disentangling chains of correlated sequence positions. in _ proceedings of the ims / ams international conference on statistics in molecular biology and genetics_ pp 236256 ( 1998 ) .bg giraud , jm heumann , and as lapedes , superadditive correlation. _ phys rev e _ * 59 , * 49834991 ( 1999 ) .a lapedes , b giraud , and c jarzynski , using sequence alignments to predict protein structure and stability with high accuracy ._ los alamos national laboratory report _ur024481 ( 2002 ) . 
later deposited at arxiv.org:1207.2484 [ qbio.qm ] ( 2012 ) .m weigt , ra white , h szurmant , ja hoch , and t hwa , identification of direct residue contacts in protein protein interaction by message passing ._ proc natl acad sci ( usa ) _ * 106 , * 6772 ( 2009 ) .ds marks , lj colwell , r sheridan , ta hopf , a pagnani , r zecchina , and c sander , protein 3d structure computed from evolutionary sequence variation ._ plos one _ * 6 , * e28766 ( 2011 ) .ji sulkowska , f morcos , m weigt , t hwa , and jn onuchic , genomics aided structure prediction . _ proc natl acad sci ( usa ) _ * 109 , * 1034010345 ( 2012 ) . j toner and y tu , long range order in a two dimensional xy model : how birds fly together ._ phys rev lett _ * 75 , * 43264329 ( 1995 ) .t vicsek , a czirk , e ben jacob , i cohen , and o shochet , novel type of phase transition in a system of self driven particles ._ phys rev lett _ * 75 , * 12261229 ( 1995 ) .j toner and y tu , flocks , herds , and schools : a quantitative theory of flocking ._ phys rev e _ * 58 , * 48284858 ( 1998 ) . s ramaswamy , the mechanics and statistics of active matter . _ annu rev cond matt phys _ * 1 , * 323345 ( 2010 ) .m ballerini , n cabibbo , r candelier , a cavagna , e cisbani , i giardina , a orlandi , g parisi , a procaccini , m viale , and v zdravkovic , empirical investigation of starling flocks : a benchmark study in collective animal behaviour ._ animal behaviour _ * 76 , * 201215 ( 2008 ) . a cavagna, i giardina , a orlandi , g parisi , a procaccini , m viale , and v zdravkovic , the starflag handbook on collective animal behaviour : 1 . empirical methods ._ animal behaviour _ * 76 , * 217236 ( 2008 ) . a cavagna, i giardina , a orlandi , g parisi , and a procaccini , the starflag handbook on collective animal behaviour : 2 .three dimensional analysis ._ animal behaviour _ * 76 , * 237248 ( 2008 ) .m ballerini , n cabibbo , r candelier , a cavagna , e cisbani , i giardina , v lecomte , a orlandi , g parisi , a procaccini , m viale , and v zdravkovic , interaction ruling animal collective behavior depends on topological rather than metric distance : evidence from a field study . _proc natl acad sci ( usa ) _ * 105 , * 12321237 ( 2008 ) .w bialek , a cavagna , i giardina , t mora , e silvestri , m viale , and a walczak , statistical mechanics for natural flocks of birds ._ proc natl acad sci ( usa ) _ * 109 , * 47864791 ( 2012 ) .w bialek , a cavagna , i giardina , t mora , o pohl , e silvestri , m viale , and am walczak , social interactions dominate speed control in poising natural flocks near criticality . _ proc natl acad sci ( usa ) _ * 111 , * 72127217 ( 2014 ) .a cavagna , a cimarelli , i giardina , g parisi , r santigati , f stefanin , and m viale , scale free correlations in starling flocks ._ proc natl acad sci ( usa ) _ * 107 , * 1186511870 ( 2010 ) .a cavagna , l del castillo , s dey , i giardina , s melillo , l parisi , and m viale , short range interaction vs long range correlation in bird flocks ._ phys rev e _ * 92 , * 012705 ( 2015 ) .a cavagna , i giardina , f ginelli , t mora , d piovani , r tavarone , and am walczak , dynamical maximum entropy approach to flocking . 
_ phys rev e _ * 89 , * 042707 ( 2014 ) .t mora , am walczak , l del castello , f ginelli , s melillo , l parisi , m viale , a cavagna , and i giardina , questioning the activity of active matter in natural flocks of birds .arxiv:1511.01958 [ qbio.pe ] ( 2015 ) .a attanasi , a cavagna , l del castello , i giardina , s melillo , l parisi , o pohl , b rossaro , e shen , e silvestri , and m viale , collective behavior without collective order in wild swarms of midges ._ plos comput biol _ * 10 , * e1003697 ( 2014 ) . a attanasi , a cavagna , l del castello, i giardina , s melillo , l parisi , o pohl , b rossaro , e shen , e silvestri , and m viale , finite size scaling as a way to probe near criticality in natural swarms ._ phys rev lett _ * 113 , * 238102 ( 2014 ) .t mora and w bialek , are biological systems poised at criticality ? _ j stat phys _ * 144 , * 268302 ( 2011 ) .gj stephens , t mora , g tkaik , and w bialek , statistical thermodynamics of natural images ._ phys rev lett _ * 110 , * 018701 ( 2013 ) .g tkaik , t mora , o marre , d amodei , se palmer , mj berry ii , and w bialek , thermodynamics for a network of neurons : signatures of criticality ._ proc natl acad sci ( usa ) _ * 112 , * 1150811513 ( 2015 ) .jm beggs and d plenz , neuronal avalanches in neocortical circuits ._ j neurosci _ * 23 , * 167177 ( 2003 ) .jm beggs and d plenz , neuronal avalanches are diverse and precise patterns of activity that are stable for many hours in cortical slice cultures . _j neurosci _ * 24 , * 52165229 ( 2004 ) .n friedman , s ito , b brinkman , m shimono , rl deville , k dahmen , jm beggs , and tc butler , universal critical dynamics in high resolution neuronal avalanche data ._ phys rev lett _ * 108 , * 208102 ( 2012 ) .mo magnasco , o piro , and ga cecchi , self tuned critical anti hebbian networks ._ phys rev lett _ * 102 , * 258102 ( 2009 ) .g solovey , kj miller , jg ojemann , mo magnasco , and ga cecchi , self regulated dynamical criticality in human ecog ._ front integ neurosci _ * 6 , * 44 ( 2012 ) .g solovey , lm alonso , t yanagawa , n fuji , mo magnasco , ga cecchi , and a proekt , loss of consciousness is associated with stabilization of cortical activity . _j neurosci _ * 35 , * 1086610877 ( 2015 ) .t mora , s deny , and o marre , dynamical criticality in the collective activity of a population of retinal neurons ._ phys rev lett _ * 115 , * 078105 ( 2015 ) .sa kauffman , metabolic stability and epigenesis in randomly constructed genetic nets . _j theor biol _ * 22 , * 437467 ( 1969 ) .b derrida and h flyvbjerg , multivalley structure in kauffman s model : analogy with spin glasses . _j phys a _ * 19 , * l10031008 ( 1986 ) . b derrida and y pomeau , random networks of automata : a simple annealed approximation ._ europhys lett _ * 1 , * 4549 ( 1986 ). e mjolsness , dh sharp , and j reinitz , a connectionist model of development ._ j theor biol _ * 152 , * 429453 ( 1995 ) .l bintu , ne buchler , hg garcia , u gerland , t hwa , j kondev , and r phillips , transcriptional regulation by the numbers : models . _ curr opin genet dev _ * 15 , * 116124 ( 2005 ) .l bintu , ne buchler , hg garcia , u gerland , t hwa , j kondev , t kuhlman , and r phillips , transcriptional regulation by the numbers : applications . _ curr opin genet dev _ * 15 , * 125135 ( 2005 ) .i shmulevich , s kauffman , and m aldana , eukaryotic cells are dynamically ordered or critical but not chaotic . 
_ proc natl acad sci ( usa ) _ * 102 , * 1343913444 ( 2005 ) .m nykter , nd price , m aldana , sa ramsey , sa kauffman , le hood , o yli harja , and i shmulevich , gene expression dynamics in the macrophage exhibit criticality . _proc natl acad sci ( usa ) _ * 105 , * 18971900 ( 2008 ) .e balleza , er alvarez buylla , a chaos , s kauffman , i shmulevich , and m aldana , critical dynamics in genetic regulatory networks : examples from four kingdoms . _ plos one _ * 3 , * e 2456 ( 2008 ) .d krotov , jo dubuis , t gregor , and w bialek , morphogenesis at criticality ? _ proc natl acad sci ( usa ) _ * 111 , * 36833688 ( 2014 ) .e lubeck and l cai , single cell systems biology by super resolution imaging and combinatorial labeling ._ nature meth _ * 9 , * 743748 ( 2012 ). kh chen , an boettiger , jr moffitt , s wang , and x zhaung , spatially resolved , highly multiplexed rna profiling in single cells . _science _ * 348 , * aaa6090 ( 2015 ) .ja weinstein , n jiang , ra white , ds fisher , and sr quake , high throughput sequencing of the zebrafish antibody repertoire ._ science _ * 324 , * 807810 ( 2009 ) .t mora , am walczak , w bialek , and cg callan jr , maximum entropy models for antibody diversity. _ proc natl acad sci ( usa ) _ * 107 , * 54055410 ( 2010 ) .jj hopfield , kinetic proofreading : a new mechanism for reducing errors in biosynthetic processes requiring high specificity . _ proc natl acad sci ( usa ) _ * 71 , * 41354139 ( 1974 ) .j ninio , kinetic amplification of enzyme discrimination ._ biochimie _ * 57 , * 587595 ( 1975 ) .jj hopfield , the energy relay : a proofreading scheme based on dynamic cooperativity and lacking all characteristic symptoms of kinetic proofreading in dna replication and protein synthesis . _ proc natl acad sci ( usa ) _ * 77 , * 52485252 ( 1980 ) .jd bryngelson and pg wolynes , spin glasses and the statistical mechanics of protein folding . _ proc natl acad sci ( usa ) _ * 84 , * 75247528 ( 1987 ) .pe leopold , m montal , and jn onuchic , protein folding funnels : a kinetic approach to the sequence structure relationship ._ proc natl acad sci ( usa ) _ * 89 , * 87218725 ( 1992 ) .jn onuchic , pg wolynes , z luthey schulten , and nd socci , toward an outline of the topography of a realistic protein folding funnel ._ proc natl acad sci ( usa ) _ * 92 , * 36263630 ( 1995 ) .mm desai , ds fisher , and aw murray , the speed of evolution and maintenance of variation in asexual populations ._ curr biol _ * 17 , * 385394 ( 2007 ) .ra neher , bi shraiman , and ds fisher , rate of adaptation in large sexual populations ._ genetics _ * 184 , * 467481 ( 2010 ) .o hallatschek , the noisy edge of traveling waves ._ proc natl acad sci ( usa ) _ * 108 , * 17831787 ( 2011 ) .ra neher and bi shraiman , statistical genetics and evolution of quantitative traits . _rev mod phys _ * 83 , * 12831300 ( 2011 ) .ds fisher , asexual evolution waves : fluctuations and universality . 
arxiv:1210.6295v1 [ qbio.pe ] ( 2012 ) .vm eguluz , m ospeck , y choe , aj hudspeth , and mo magnasco , essential nonlinearities in hearing ._ phys rev lett _ * 84 , * 52325235 ( 2000 ) .s calamet , t duke , f jlicher , and j prost , auditory sensitivity provided by self tuned critical oscillations of hair cells ._ proc natl acad sci ( usa ) _ * 97 , * 31833187 ( 2000 ) .mo magnasco , a traveling wave over a hopf bifurcation shapes the cochlear tuning curve ._ phys rev lett _ * 90 , * 058101 ( 2003 ) .mh vos , mr jones , cn hunter , j breton , jc lambry , and jl martin , coherent nuclear dynamics at room temperature in bacterial reaction centers . _ proc natl acad sci ( usa ) _ * 91 , * 1270112705 ( 1994 ) .gs engel , tr calhoun , el read , t k ahn , t manal , y c cheng , re blankenship , and gr fleming , evidence for wavelike energy transfer through quantum coherence in photosynthetic systems ._ nature _ * 446 , * 782786 ( 2007 ) .ah sturtevant , the linear arrangement of six sex linked factors in _ drosophila _ , as shown by their mode of association ._ j exp zool _ * 14 , * 4359 ( 1913 ) .
theoretical physics is the search for simple and universal mathematical descriptions of the natural world. in contrast, much of modern biology is an exploration of the complexity and diversity of life. for many, this contrast is prima facie evidence that theory, in the sense that physicists use the word, is impossible in a biological context. for others, this contrast serves to highlight a grand challenge. i'm an optimist, and believe (along with many colleagues) that the time is ripe for the emergence of a more unified theoretical physics of biological systems, building on successes in thinking about particular phenomena. in this essay i try to explain the reasons for my optimism, through a combination of historical and modern examples.
erasure codes can be used in data storage systems that encode and disperse information to multiple storage nodes in the network (or multiple disks inside a large data center), such that a user can retrieve it by accessing only a subset of them. this kind of system is able to provide superior availability and durability in the event of disk corruption or network congestion, at a fraction of the cost of current state-of-the-art storage systems based on simple data replication. when data is coded by an erasure code, data repair becomes more involved, because the information stored at a given node may not be directly available from any one of the remaining storage nodes. one key issue that affects the overall system performance is the total amount of information that the remaining nodes need to transmit to the new node. dimakis _et al._ proposed the framework of regenerating codes to address the tradeoff between the storage and repair bandwidth in erasure-coded distributed storage systems. in this framework, the overall system consists of storage nodes situated in different network locations, each with units of data, and the content is coded in such a way that by accessing any of these storage nodes, the full data content of units can be completely recovered. when a node fails, a new node may access any remaining nodes for units of data each, in order to regenerate a new data node. the main result in is for the so-called functional-repair case, where the regenerating process does not need to exactly replicate the original data stored on the failed node, but only needs to guarantee that the regenerated node can serve the same purpose as the lost node, i.e., data reconstruction using any nodes, and being able to help regenerate new data nodes to replace subsequently failed nodes. it was shown that this problem can be cleverly converted to a network multicast problem, and the celebrated result on network coding can be applied directly to provide a complete characterization of the optimal bandwidth-storage tradeoff. furthermore, linear network codes are sufficient to achieve this optimal performance. the decoding and repair rules for functional-repair regenerating codes may evolve as nodes are repaired, which increases the overhead of the system. moreover, functional-repair does not guarantee storage in systematic form, which is an important requirement in practice. for these reasons, exact-repair regenerating codes have received considerable attention recently, where the regenerated data need to be exactly the same as that stored in the failed node. the optimal bandwidth-storage tradeoff for the functional-repair case can clearly serve as an outer bound for the exact-repair case. there also exist code constructions for the two extreme cases, i.e., the minimum storage regenerating (msr) point and the minimum bandwidth regenerating (mbr) point, and the aforementioned outer bound is in fact achievable at these two extreme points. the achievability of these two extreme points immediately implies that for the cases, the functional-repair outer bound is tight for the exact-repair case.
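as a point of reference for this discussion, the functional-repair tradeoff can be traced numerically from the cut-set bound of dimakis _et al._, which states that a file of size B is supportable by an (n, k, d) regenerating code with per-node storage alpha and per-helper repair traffic beta only if B <= sum_{i=0}^{k-1} min(alpha, (d-i) beta). the short sketch below is my own illustration rather than code from any of the cited works; the parameters (n, k, d) = (4, 3, 3) and B = 1 are assumptions chosen to match the four-node, three-helper setting considered later.

```python
# numeric sketch (illustration only, not from the cited papers) of the
# functional-repair cut-set bound of dimakis et al.:
#     B <= sum_{i=0}^{k-1} min( alpha, (d - i) * beta )
# for each repair bandwidth beta we bisect for the smallest per-node storage
# alpha that keeps a file of size B supportable; sweeping beta traces the
# functional-repair storage/bandwidth tradeoff curve.
import numpy as np

def min_alpha(B, k, d, beta, tol=1e-9):
    """smallest alpha satisfying the cut-set bound for given (B, k, d, beta)."""
    if sum((d - i) * beta for i in range(k)) < B - tol:
        return None                      # beta too small: infeasible for any alpha
    lo, hi = 0.0, B
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        cap = sum(min(mid, (d - i) * beta) for i in range(k))
        lo, hi = (mid, hi) if cap < B - tol else (lo, mid)
    return hi

B, k, d = 1.0, 3, 3                      # assumed example parameters (see text)
for beta in np.linspace(1.0 / 6.0, 1.0 / 3.0, 6):
    print(f"beta = {beta:.4f}   alpha >= {min_alpha(B, k, d, beta):.4f}")
```

the small-beta end of the sweep lands on the mbr point and the large-beta end on the msr point mentioned above.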
also relevant is the fact that symbol extensions are necessary for linear codes to achieve the msr point for some parameter range; however, the msr point can indeed be asymptotically (in ) achieved by linear codes for all the parameter range. it was also shown in that when , other than the two extreme points and a segment close to the msr point, the majority of the functional-repair outer bound is in fact not strictly achievable by exact-repair regenerating codes. the non-achievability result reported in was proved by contradiction, i.e., a contradiction will occur if one supposes that an exact-repair code operates _strictly_ on the optimal functional-repair tradeoff curve. however, it is not clear whether this contradiction is caused by the functional-repair outer bound being only asymptotically achievable, or caused by the existence of a non-vanishing gap between the optimal tradeoff of exact-repair codes and the functional-repair outer bound. in fact, the necessity of symbol extension proved in and the asymptotically optimal construction given in may be interpreted as suggesting that the former is true. in this work, we focus on the simplest case of exact-repair regenerating codes, i.e., when , for which the rate region has not been completely characterized previously. a complete characterization of the rate region is provided for this case, which shows that indeed there exists a non-vanishing gap between the optimal tradeoff of the exact-repair codes and that of the functional-repair codes. the achievability part of this result shows that there exist exact-repair regenerating codes that are better than simply time-sharing between the msr point and the mbr point. as in many open information theoretic problems, the difficulty lies in finding good outer bounds, particularly in this problem with a large number of regenerating and reconstruction requirements. we rely on a computer-aided proof (cap) approach and take advantage of the symmetry and other problem-specific structure to reduce the number of variables in the optimization problem. this approach builds upon yeung's linear programming (lp) framework. to the best of our knowledge, this is the first time that the lp framework is meaningfully applied to a non-trivial engineering problem, which leads to a complete solution. more importantly, instead of only machine-proving whether an information theoretic bound is true or not as in , we further develop a secondary optimization procedure to find an _explicit information theoretic proof_. by solving the primary lp optimization problem, the tradeoff curve between the storage and repair bandwidth can be traced out numerically, which leads to the hypotheses of the bounding planes for the rate region. a secondary optimization procedure, which essentially solves the dual problem for these candidate bounding planes, directly yields an explicit information theoretic proof. due to the duality structure in the lp problem, the optimization criterion in the secondary optimization problem can be selected arbitrarily, thus we can choose one that leads to the solution that we most desire. for this purpose, one norm is chosen to approximate the solution under another norm, the latter of which gives the sparsest solution and translates roughly to a converse proof with the least number of steps. the rest of the paper is organized as follows.
in section [ sec : definition ] , we provide a formal definition of the problem and briefly review the functional-repair outer bound. the characterization of the rate region is given in section [ sec : main ] , together with the forward and converse proofs. section [ sec : cap ] provides details on the computer-aided proof approach, and section [ sec : conclusion ] concludes the paper. in this section we first give a formal definition of the regenerating code problem for the case , and then introduce some notation useful for the converse proof. somewhat surprisingly, we were not able to find such a formal definition in the existing literature, and thus believe it is beneficial to include one here (which can be generalized to other parameters). the functional-repair outer bound is briefly reviewed and specialized to the case under consideration. an exact-repair regenerating code is formally defined as follows, where the notation is used to denote the set , and is used to denote the cardinality of a set. [ def : nkkcode ] an exact-repair regenerating code for the case consists of encoding functions , decoding functions , repair encoding functions , and repair decoding functions , where each of which maps the message to one piece of coded information, each of which maps pieces of coded information stored on a set of nodes to the original message, each of which maps a piece of coded information at node to an index that will be made available to reconstruct the data at node , and each of which maps 3 such indices from the helper nodes to reconstruct the information stored at the failed node. the functions must satisfy the data reconstruction conditions and the repair conditions. in the above definition, is the cardinality of the message set, and is essentially . similarly is essentially and is . to include the case when the storage-bandwidth tradeoff may be approached asymptotically, e.g., the codes considered in , the following definition, which utilizes a normalized version of and , is further introduced.
note that together with ( [ eqn : reconstruction ] ) , this implies that the symmetric storage requirement can be written as and the regenerating bandwidth constraint can be written as the above constraints ( [ eqn : reconstruction ] )-( [ eqn : beta ] ) are the constraints that need to be satisfied by any exact-repair regenerating code. these constraints will be used later in the converse proof. the optimal tradeoff for functional-repair regenerating codes was given by dimakis _et al._ , which provides an outer bound for the exact-repair case. the bound has the following form in our notation for the case (see fig . [ fig : bound433 ] ) . it is not difficult to show that it can be rewritten as the following four simultaneous linear bounds. the msr point for this case is , and the mbr point is . the following theorem provides a complete characterization of the rate region of the exact-repair regenerating codes. [ theorem : main ] the rate region of the exact-repair regenerating codes is given by the collection of pairs that satisfy the following constraints. this rate region is also depicted in fig . [ fig : bound433 ] , together with the functional-repair outer bound. it is clear that there is a gap between them, and thus the functional-repair outer bound cannot be asymptotically achievable under the exact-repair requirement. note that the only difference between the region given in theorem [ theorem : main ] and that in ( [ eqn : frouterbound ] ) is the third bounding plane. the rate region has three corner points, and thus we only need to show that these three points are all achievable. the msr point is simply achieved by any mds code, such as the binary systematic code with a single parity check bit. the mbr point is also easily obtained by using the repair-by-transfer code construction in , which in this case reduces to simple replication coding. it thus only remains to show that the point is also achievable. next we shall give a construction for a binary code with , and , which indeed achieves this operating point. the code is illustrated in table [ table : code ] , where (here and in the remainder of this section) the addition is in the binary field. here are the systematic bits , , and the remaining bits are the parity bits. first note that the construction is circularly symmetric, and thus without loss of generality, we only need to consider the case when node 1 fails. if it can be shown that when nodes 2, 3, 4 each contribute two bits, node 1 can be reconstructed, which also implies that the complete data can be recovered using only nodes 2, 3 and 4, then the proof is complete. this can indeed be done using the combination shown in table [ table : repair ] . upon receiving these six bits in table [ table : repair ] , the new node can form the following combinations, where the first combination is formed by using the second bit from node 2 and the first bit from node 3 (shown in bold), and the other combinations can be formed similarly.
in the binary field, this is equivalent to having and it is seen that can be recovered by simply taking the difference between ( [ eqn : first ] ) and ( [ eqn : second ] ) , and similarly can be recovered by taking the difference between ( [ eqn : first ] ) and ( [ eqn : third ] ) . note further that the third bit stored in node 1 is simply the summation of the first bits contributed from nodes 2, 3, and 4 in table [ table : repair ] . the proof is thus complete. the hand-crafted code presented above is specific for the case . however, in a recent work, sasidharan and kumar discovered a class of codes that is optimal for the case at operating points other than msr or mbr, and specializing it to the case achieves the same performance as the code above; see also for a closely related code construction.
table [ table : code ] : a code for the case .
only a small subset of the joint entropy terms are in the solution of this secondary lp problem, as listed in table [ tab : correspondence ] , where we also give them labels to facilitate subsequent discussion. here the letters are used to denote four distinct indices in the set , because by the symmetry, they may assume any order. we can now tabulate the solution of the secondary lp problem, as given in table [ table : cancellation ] , one row corresponding to one row in , i.e., one basic information inequality as shown in table [ table : basicinequalities ] . the last line in table [ table : cancellation ] is the row summation, which is indeed . note that the last inequality of table [ table : basicinequalities ] is also a basic information inequality, but it is not in the form of ( [ eqn : shannontype2 ] ) because some problem-specific reduction discussed in the previous sub-section has been incorporated. though this is already a valid proof, we can manually combine several inequalities to simplify the proof, and the converse proof given in the previous section is the result after such further manual simplifications. in , yeung showed that all unconstrained shannon-type inequalities are linear combinations of elemental shannon-type inequalities, i.e., ( [ eqn : shannontype1 ] ) and ( [ eqn : shannontype2 ] ) . the approach we have discussed above can be viewed as a generalization of this result under additional problem-specific constraints.
however, the introduction of the norm objective function to approximately find the sparsest linear combination has not been used previously to investigate information inequalities, and thus it is a novel ingredient. moreover, the proof given in relies on the fact that all joint entropies can be represented by linear combinations of the elementary forms of shannon's information measures, which are the left hand sides of ( [ eqn : shannontype1 ] ) and ( [ eqn : shannontype2 ] ) . since our result is regarding the _tightest_ bounds that can be obtained using the lp approach, the proof directly follows from strong duality without relying on the completeness of the elementary forms of shannon's information measures. a complete characterization is provided for the rate region of the exact-repair regenerating codes, which shows that the cut-set outer bound is in general not (even asymptotically) tight for exact-repair. an explicit binary code construction is provided to show that the given rate region is achievable. one main novelty of the work is that a computer-aided proof approach is developed by extending yeung's linear programming framework, and an explicit information theoretic proof is directly obtained using this approach. we believe customizing the lp approach to other communication problems based on similar reduction techniques can be a rather fruitful path, which appears particularly suitable for research on storage systems, and thus we have presented some related details in this work. although sparsity is used approximately as an objective in the secondary lp problem, this sparsity is only with respect to the elementary shannon-type inequalities ( [ eqn : shannontype1 ] )-( [ eqn : shannontype2 ] ) , and thus including more redundant basic shannon-type inequalities may lead to an even sparser solution. this is already evident in the algebraic proof given above, where manual simplifications were made and some basic inequalities not in ( [ eqn : shannontype1 ] )-( [ eqn : shannontype2 ] ) were used. including these basic inequalities in the secondary lp will clearly yield a sparser solution. it should also be noted that sparsity only translates roughly to a small number of proof steps, but does not necessarily lead to a structured proof that can be extended to general parameter settings. the result presented in this work revealed that the cut-set outer bound is in general not tight for exact-repair. though a complete solution for the special case of is given, the rate region characterization problem under general parameters is still open. readers may wonder whether the procedure given in section [ sec : cap ] can be used on the general problem, and whether it fundamentally alters the complexity order of the primal optimization problem. unfortunately, although a few more cases with small values can be tackled this way, the complexity is still too high for larger parameter values, and the list of information inequalities involved in the proof is quite large. in fact, running the set growth and symmetry determination procedures alone for each variable is of exponential order in the total number of random variables. as ongoing work, we are investigating whether low-complexity procedures exist that can further take into account the symmetry. through such a general study, we hope to discover more structure in the converse proof, which may lead to the complete solution of general exact-repair regenerating codes.
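to make the lp machinery described above a little more concrete, the following toy sketch (my own illustration, not the paper's actual program, which involves many more random variables together with the problem-specific reductions and the secondary sparsity-seeking optimization discussed above) sets up yeung's framework for three random variables: the joint entropies of all non-empty subsets are the lp variables, the polymatroid (shannon-type) inequalities are the constraints, and a candidate inequality is certified as shannon-provable when the lp maximum of its violation is zero. scipy's linprog is assumed to be available.

```python
# toy version of yeung's lp framework (illustration only): certify that
#     h(123) - h(12) - h(23) + h(2) <= 0,   i.e.  i(x1; x3 | x2) >= 0,
# is implied by the shannon-type (polymatroid) inequalities, by checking that
# the lp maximum of the left-hand side over all polymatroid vectors is zero.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

ground = (1, 2, 3)
subsets = [frozenset(c) for r in range(1, 4) for c in combinations(ground, r)]
idx = {s: i for i, s in enumerate(subsets)}           # 7 lp variables h_S

def row(coeffs):
    """dense constraint/objective row from a {subset: coefficient} dict (h_empty = 0)."""
    v = np.zeros(len(subsets))
    for s, c in coeffs.items():
        s = frozenset(s)
        if s:
            v[idx[s]] += c
    return v

a_ub, b_ub = [], []
for a in subsets:                                     # monotonicity: h_A <= h_B for A subset of B
    for b in subsets:
        if a < b:
            a_ub.append(row({a: 1, b: -1})); b_ub.append(0.0)
for a in subsets:                                     # submodularity: h_{A|B} + h_{A&B} <= h_A + h_B
    for b in subsets:
        a_ub.append(row({a | b: 1, a & b: 1, a: -1, b: -1})); b_ub.append(0.0)

# maximize h(123) - h(12) - h(23) + h(2); linprog minimizes, so negate the objective
c = -row({(1, 2, 3): 1, (1, 2): -1, (2, 3): -1, (2,): 1})
res = linprog(c, A_ub=np.array(a_ub), b_ub=np.array(b_ub), bounds=(0, None))
print(res.fun)   # approximately 0: the candidate inequality is shannon-provable
```

replacing the toy objective with the storage and repair-bandwidth quantities, adding the reconstruction and regenerating constraints from the definitions above, and then running a second lp over the dual multipliers would mirror, in miniature, the primary and secondary optimizations described in this paper.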
for any that is exact - repair achievable, there exists for any , an exact - repair regenerating code such that let us for now fix a value , and consider an exact - repair regenerating code satisfy the above conditions , which may or may not induce a symmetric entropy vector . let the encoding and decoding functions be denoted as : , , , , as given in definition [ def : nkkcode ] .we shall show that it can be used to construct an code that induces a symmetric distortion vector , which clearly satisfies ( [ eqn : conditiondefinition ] ) , and the proof will be completed by making arbitrarily small .let the distinct permutations of be , and let their inverse function be .the new encoding and decoding functions can be written as because of the symmetry of the new encoding and decoding functions , it is clear by utilizing ( [ wsdefinition ] ) that they indeed induce a symmetric entropy vector , according to definition [ def : symmetricentropy ] .the zero - error decoding and repair requirements are satisfied because the original code is able to accomplish them .the proof is thus complete .the author wishes to thank dr .dahai xu at at&t labs - research for introducing him to the cplex optimization software package .n. b. shah , k. v. rashmi , p. v. kumar and k. ramchandran , distributed storage codes with repair - by - transfer and non - achievability of interior points on the storage - bandwidth tradeoff , , vol .1837 - 1852 , mar .n. b. shah , k. v. rashmi , p. v. kumar and k. ramchandran , interference alignment in regenerating codes for distributed storage : necessity and code constructions , , vol .4 , pp . 2134 - 2158 , apr .k. v. rashmi , n. b. shah , and p. v. kumar , optimal exact - regenerating codes for distributed storage at the msr and mbr points via a product - matrix construction , , vol .57 , no . 8 , pp . 5227 - 5239 , aug .
exact - repair regenerating codes are considered for the case , for which a complete characterization of the rate region is provided . this characterization answers in the affirmative the open question whether there exists a non - vanishing gap between the optimal bandwidth - storage tradeoff of the functional - repair regenerating codes ( _ i.e. , _ the cut - set bound ) and that of the exact - repair regenerating codes . to obtain an explicit information theoretic converse , a computer - aided proof ( cap ) approach based on primal and dual relation is developed . this cap approach extends yeung s linear programming ( lp ) method , which was previously only used on information theoretic problems with a few random variables due to the exponential growth of the number of variables in the corresponding lp problem . the symmetry in the exact - repair regenerating code problem allows an effective reduction of the number of variables , and together with several other problem - specific reductions , the lp problem is reduced to a manageable scale . for the achievability , only one non - trivial corner point of the rate region needs to be addressed in this case , for which an explicit binary code construction is given .
the subtlety of the theory of swimming at low reynolds number has not always been fully appreciated. it is important to have simple examples for which calculations can be performed in detail. the first such example was furnished by taylor in his seminal work on the swimming of an undulating planar sheet immersed in a viscous incompressible fluid. soon after, lighthill studied the swimming of a sphere. he considered a squirming sphere with surface displacements in the spherical surface. his work was extended by blake, who considered the full class of surface displacements. the goal of the theory is to calculate the swimming velocity and the rate of dissipation in the fluid for given time-periodic deformations of the body. the rate of dissipation equals the power necessary to achieve the swimming motion. shapere and wilczek formulated the problem in terms of a gauge field on the space of shapes. they pointed out that the measure of efficiency of a stroke introduced by lighthill and blake is not appropriate. in low reynolds number swimming, unlike in the problem of stokes friction, the power is proportional to the speed, rather than to the square of the speed. as a measure of efficiency shapere and wilczek therefore introduced a dimensionless number measuring the ratio of speed and power, rather than the ratio of speed squared and power. the theory of swimming at low reynolds number is based on the stokes equations. in earlier work we have extended the theory to include the rate of change of fluid momentum, as given by the linearized navier-stokes equations. as an example we studied small-amplitude swimming of a deformable sphere, and found the optimum efficiency for the class of swimming motions for which the first order flow velocity is irrotational. our definition of efficiency was analogous to that of shapere and wilczek. the calculation based on the linearized navier-stokes equations was rather elaborate. it turns out that for irrotational flow the inertial effect vanishes, so that for this class of fluid motions it suffices to use the stokes equations. this allows a simpler formalism and easier calculations. in the following we discuss the theory on the basis of the stokes equations, and in addition derive some new results. the stroke of maximum efficiency involves a significant contribution of high-order multipoles. this leads us to consider an additional measure of swimming performance, allowing minimization of the energy consumption at fixed amplitude of stroke. we provide a numerical estimate of speed and power for optimal swimming via potential flow of a typical bacterium. customarily the speed is calculated for given power from stokes drag. in the first part of the article we restrict attention to axisymmetric irrotational flow. the fluid flow velocity can be derived from a scalar potential which satisfies laplace's equation. it is therefore natural to introduce multipoles in analogy to electrostatics.
to linear orderthe pressure disturbance vanishes .the swimming speed and the power are bilinear in the surface displacements .the class of potential flows is important because of the connection to inviscid flow theory based on the full set of navier - stokes equations , as relevant for swimming at high reynolds number .subsequently we study more general axisymmetric polar flow .this involves modes with vorticity and a non - vanishing pressure disturbance , and requires the use of an additional set of multipoles .it turns out that the more complicated flow with vorticity leads to a significantly higher maximum efficiency than found for potential flow .again we consider the measure of swimming performance based on energy consumption at fixed amplitude , and provide a numerical estimate for a typical bacterium .we consider a flexible sphere of radius immersed in a viscous incompressible fluid of shear viscosity . at low reynolds number and on a slow time scale the flow velocity and the pressure satisfy the stokes equations the fluid is set in motion by time - dependent distortions of the sphere .we shall study periodic distortions which lead to swimming motion of the sphere .the surface displacement is defined as the vector distance of a point on the displaced surface from the point on the sphere with surface .the fluid velocity is required to satisfy this amounts to a no - slip boundary condition .the instantaneous translational swimming velocity , the rotational swimming velocity , and the flow pattern follow from the condition that no net force or torque is exerted on the fluid .we evaluate these quantities by a perturbation expansion in powers of the displacement . in the first part of the article we restrict attention to motions for which to first order in the displacement the flow is irrotational , so that the flow velocity is the gradient of a scalar potential , we specify the surface displacement by assuming an expression for the first order potential .we assume the flow to be symmetric about the axis , so that in spherical coordinates , defined with respect to the center of the sphere in the rest system , the potential takes the form .the potential tends to zero at infinity , and can be expressed as the poisson integral with a source density localized within the sphere of radius .to first order the pressure remains constant and equal to the ambient pressure .we regard the source density as given , and define the surface displacement from instead of we regard as the expansion parameter . for given source density one can evaluate the first order potential by use of eq .hence one finds the first order flow velocity by use of eq . ( 2.4 ) .since this tends to zero faster than , the force exerted on the fluid and the swimming velocity vanish to first order .the rotational velocity and the torque vanish automatically by symmetry .we consider in particular harmonic time variation at frequency , with source density with suitably chosen functions and .since the no - slip condition is nonlinear , the solution of the flow problem involves harmonics with all integer multiples of .we perform a perturbation expansion in powers of the two - component source density . 
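before turning to the second order results, a small symbolic check may make the first order setup concrete. the sketch below is my own illustration: the multipole order l = 3 and the overall normalization are arbitrary choices rather than the paper's conventions. it verifies that an exterior axisymmetric harmonic of the form r^(-(l+1)) p_l(cos theta) satisfies laplace's equation, and that the associated velocity u = grad phi decays faster than the 1/r stokeslet field, consistent with the statement that no net force is exerted on the fluid to first order.

```python
# symbolic sketch (illustration only; normalization and l = 3 are arbitrary choices,
# not the paper's conventions) of the first-order potential-flow ingredients:
# an exterior axisymmetric harmonic phi ~ r**(-(l+1)) * P_l(cos(theta)) solves
# laplace's equation, and u = grad(phi) decays faster than the 1/r stokeslet field.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
l = 3                                             # example multipole order

phi = r**(-(l + 1)) * sp.legendre(l, sp.cos(theta))

# axisymmetric laplacian in spherical coordinates
laplacian = (sp.diff(r**2 * sp.diff(phi, r), r) / r**2
             + sp.diff(sp.sin(theta) * sp.diff(phi, theta), theta) / (r**2 * sp.sin(theta)))
print(sp.simplify(laplacian))                     # -> 0

# spherical components of u = grad(phi)
u_r = sp.diff(phi, r)                             # ~ r**(-(l+2))
u_theta = sp.diff(phi, theta) / r                 # ~ r**(-(l+2))
print(sp.simplify(u_r), sp.simplify(u_theta))
```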
to second order in the flow velocity and the swimming velocitytake the form both and vary harmonically with frequency , and can be expressed as expanding the no - slip condition eq .( 2.3 ) to second order we find for the flow velocity at the surface hence the swimming velocity can be evaluated as the time - averaged swimming velocity is given by where the overhead bar indicates a time - average over a period .the remainder oscillates at frequency .to second order the rate of dissipation is determined entirely by the first order solution .it may be expressed as a surface integral the rate of dissipation is positive and oscillates in time about a mean value .the mean rate of dissipation equals the power necessary to generate the motion .in explicit calculations we expand the source density and the first order potential in spherical harmonics .we define the solid spherical harmonics as with legendre polynomials in the notation of edmonds . the source density inside the sphere generates a potential proportional to outside the sphere .it is natural to extend the potential and the corresponding velocity field inside the sphere .the first order potential outside the sphere is expanded as with dimensionless multipole coefficients .the corresponding first order potential inside the sphere is given by (\cos\theta),\qquad r < a.\ ] ] this has been constructed such that the potential and its radial derivative are continuous at .the corresponding source density is the first order flow outside the sphere is with component field ,\ ] ] with associated legendre function of the first kind , in the notation of edmonds .we note that for the time - dependent source density of the form eq .( 2.7 ) the multipole coefficients are time - dependent and can be expressed as these generate the first order flow the corresponding displacement is {\mbox{\boldmath }}_l(a,\theta).\ ] ] in the calculation of the mean swimming velocity , as given by eq .( 2.12 ) , we use the identity this shows that the mean swimming velocity is given by a sum of products of adjacent multipole coefficients , .\ ] ] we define the multipole moment vector as the one - dimensional array then can be expressed as with a dimensionless symmetric matrix .the upper left - hand corner of the matrix , truncated at , reads on the cross - diagonals the numbers appear for . in the calculation of the rate of dissipation , as given by eq .( 2.18 ) , we use the identity hence the time - averaged rate of dissipation is given by this can be expressed as with a dimensionless diagonal matrix .the upper left - hand corner of the matrix , truncated at , reads on the diagonal the numbers appear for .the crucial identities ( 3.11 ) and ( 3.16 ) are proved by use of the generating function of the legendre polynomials , or by use of known identities relating the polynomials .the question arises how to maximize the mean swimming velocity for given mean rate of dissipation .this leads to an eigenvalue problem for the set of multipole coefficients , the mathematical discussion is simplified by truncating the matrices at a maximum -value , say .we call the truncated -dimensional matrices and .the truncated matrices correspond to swimmers obeying the constraint that all multipole coefficients for vanish .it is seen from eq .( 3.12 ) that there is a degeneracy in the problem .the sum for the mean velocity consists of a sum of two interlaced chains . in the one chain the -coefficients for even and the -coefficients for odd appear . 
in the other chainthe -coefficients for odd and the -coefficients for even appear .it is therefore sufficient to consider the first type of chain .eigenvectors of this form with the coefficients for the second chain put equal to zero can be mapped onto eigenvectors for the same eigenvalue with the two chains interchanged .we call eigenvectors of the first type even , and eigenvectors of the second type odd .the degeneracy corresponds to invariance under a shift in time by .there is also a symmetry under time reversal .eigenvalues appear in pairs .the even eigenvector for can be obtained from the even eigenvector for by the replacement of the -coefficients by their opposites , leaving the -coefficients unchanged .for the two conjugate eigenvectors the swimming velocity is equal and opposite for the same rate of dissipation .the first symmetry allows a simplification of the eigenvalue problem by a reduction of the matrix dimension by a factor one half .there is a duplication in the matrices and which can be removed by use of complex notation .thus we introduce the complex multipole moment and correspondingly instead of eq .( 3.13 ) then and can be expressed as with the notation the truncated matrices and read the eigenvalue problem now reads since the matrices and are real and symmetric , the eigenvectors can be chosen to be real .with truncation at the eigenvalue problem eq .( 4.7 ) is identical to that for a linear harmonic chain with masses corresponding to the diagonal elements of the matrix and spring constants corresponding to the off - diagonal elements of the matrix .we can simplify further by renormalizing such that the masses are equal .thus we introduce the modified moments with these moments the rate of dissipation is where with unit matrix , and the swimming velocity is where is symmetric with non - zero elements the coefficients tend to unity for large , so that the eigenvalue problem corresponds to a chain of equal masses coupled by spring constants which become uniform for large .we impose the constraint that the multipole coefficients for vanish . the coefficients for correspond to uniform spherical expansion , which is excluded if we impose volume conservation .we denote the matrices truncated at and with the first two rows and columns deleted as and .these have dimension .the corresponding matrices and have dimension and the matrices and have dimension . the eigenvalue problem eq .( 4.12 ) for the linear chain of equal masses coupled with equal force constants has eigenvalues and corresponding eigenvectors with components where is a normalization factor .the largest eigenvalue occurs for . for this eigenvaluethe components of the eigenvector vary slowly with . in the limit the maximum eigenvalue tends to and the components of the corresponding eigenvector tend to a constant .as characteristic dimension of the sphere we take the diameter . 
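as a quick numerical illustration of the idealized chain just described , the python sketch below ( our addition , not part of the original article ) builds the tridiagonal matrix of a uniform chain with unit couplings and identity mass matrix and checks that its largest eigenvalue , 2 cos ( pi / ( n + 1 ) ) , approaches 2 as the truncation order grows ; identifying this textbook matrix with the limiting form of eq . ( 4.12 ) is our reading of the text and should be treated as an assumption .

```python
import numpy as np

def uniform_chain_max_eigenvalue(n):
    """Largest eigenvalue of the n x n tridiagonal matrix with zero diagonal and
    unit off-diagonal couplings (idealized uniform chain, mass matrix = identity)."""
    b_hat = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.linalg.eigvalsh(b_hat)[-1]

for n in (4, 8, 16, 64, 256):
    numeric = uniform_chain_max_eigenvalue(n)
    analytic = 2.0 * np.cos(np.pi / (n + 1))   # classical spectrum of this matrix
    print(f"n = {n:4d}   lambda_max = {numeric:.6f}   2 cos(pi/(n+1)) = {analytic:.6f}")

# Both columns agree and tend to 2 as n grows, which we read as the limiting
# maximum eigenvalue referred to in the text for the uniform chain.
```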
the dimensionless efficiency of translational swimming is defined as the ratio the optimum efficiency is related to the maximum eigenvalue by due to a different normalization of the matrix the eigenvalue is four times that defined earlier .it follows from eq .( 4.13 ) that the optimum efficiency is .it is therefore of interest to consider the relative efficiency as a measure of efficiency in the space of potential flows .here we have used the notation of shapere and wilczek .we denote the eigenvector with largest eigenvalue of the truncated eigenvalue problem eq .( 4.7 ) with matrices and as , with normalization , and define then correspondingly the maximum eigenvalue increases monotonically with , since with increasing the space of possible modes gets larger . in fig .1 we plot for values . in fig .2 we show the components of the eigenvector with largest eigenvalue for . as shown in fig .1 the efficiency increases monotonically with .this suggests that the limit corresponds to the best swimmer .however , it is worthwhile to consider also the dimensionless speed and power separately .it is seen numerically that both quantities increase linearly with at large . when listing values for different we are comparing speed and power for eigenvectors with the same normalization .it makes more sense to compare chains with the same amplitude of motion .it follows from eqs .( 3.6 ) and ( 3.10 ) that for the eigenvector the displacement at describes an ellipse in the plane given by the equation with and given by for multipoles given by the ellipse described by will have vertical semi - axis and horizontal semi - axis , if we take the axis to be horizontal .we find that the vertical semi - axis is larger than the horizontal one , except for and . for multipoles the vertical semi - axis has length , where can be taken to be independent of .we therefore consider the reduced speed and power at fixed vertical amplitude of stroke , in fig .3 we plot the reduced speed as a function of , and in fig .4 we plot the reduced power as a function of . remarkably , the reduced power at fixed amplitude shows a minimum at , given by .an animalcule for which the amplitude of motion is given by its structure , and for which the relative amplitude of stroke is fixed , say at , swims with least power for displacement determined by the set of multipoles with . at reduced amplitude is , and the reduced speed is . for the set of multipoles the mean speed and rate of dissipation are in low reynolds numberswimming the speed is proportional to the power .it is incorrect to estimate the required power on the basis of stokes law , which corresponds to pulling of the sphere through the fluid . in the case of pullingthe power is proportional to the square of the speed . for a bacterium of radius in water of shear viscosity in si units ,the power for is watt .the corresponding speed is m / sec .the frequency is estimated as sec .this is to be compared with the viscous time scale sec .the power is calculated from eq .( 5.9 ) as watt and speed m / sec .the efficiency is , compared with the maximum possible for potential flow .the metabolic rate of birds has been measured as 20.000 watt / m , of which one quarter is estimated to be available for mechanical work . 
accepting the same rate for bacteria ,we have watt , and hence find relative amplitude and speed m / sec .therefore the bacterium moves several diameters per second , in reasonable agreement with experimental data .the specific energy consumption , defined as the power divided by the product of speed and weight , is about five orders of magnitude larger than that of a boeing 747 .we note that dusenbery estimates the available power as only 3 watt / m , instead of 5000 watt / m . in our calculationthis low power level would lead to a much too small speed .it is of interest to study some features of the swimming motion in more detail .as we have shown above , the mean speed and mean power to second order in the displacement are given by bilinear expressions derived from the first order flow pattern .for a chosen characteristic amplitude the latter can be optimized to provide speed at minimum power .the set of multipoles with corresponding to the eigenvector with maximum eigenvalue leads to optimal swimming . in fig .5 we plot the nearly circular motion of the displacement vector at and for seven - eighth of the period , starting at . in fig .6 we show the radial displacement as a function of the polar angle at times and .this demonstrates the running wave character of the surface wave .the plot for the tangential displacement looks similar .the second order velocity follows from eq .this can be evaluated by use of eq .( 3.11 ) , which yields .\end{aligned}\ ] ] the time - average of this expression equals that given in eq .( 3.12 ) . in fig .7 we plot the ratio for the optimal stroke with displacement determined by the set of multipoles with .the maximum deviation from unity is about one percent .the second order rate of dissipation follows from eq .this can be evaluated by use of eq .( 3.16 ) , which yields .\ ] ] the time - average of this expression equals that given in eq .( 3.17 ) . in fig .7 we plot also the ratio for the optimal stroke with displacement determined by the set of multipoles with .it turns out that for this stroke equals unity within numerical accuracy .the second order flow velocity follows from the second order velocity at the surface , as given by eq .the latter can be expanded in terms of a complete set of outgoing waves , where takes the values , as indicated elsewhere .the modes with are accompanied by a pressure disturbance .the contribution for decays with a long range flow pattern falling off as .this must be cancelled by a stokes solution which vanishes on the sphere of radius and tends to as .the procedure can be performed straightforwardly , but we shall not present the details . in principle the perturbation expansion in powers of the surface displacement , as indicated in eq . ( 2.8 ) , can be extended to higher order in similar fashion .in the following we extend the analysis to more general flows .we consider motions for which to first order in the displacement the flow is axisymmetric and polar , so that in spherical coordinates the flow velocity and the pressure do not depend on , and has vanishing component . in general the solutions of the stokes equations for the flow about a sphere have been classified into three types indexed .the potential flows considered earlier are of type .we now consider in addition flows of type .for the potential flows the pressure disturbance vanishes , but the flows of type can not be expressed as the gradient of a scalar potential and there is a pressure disturbance . 
for an axisymmetric flow of type the flow velocity has only a component , andthe pressure disturbance vanishes .flows of this type do not contribute to the translational velocity of the sphere .the first order flow outside the sphere is expanded as ,\qquad r > a,\ ] ] with component field given by eq .( 3.6 ) , and given by .\ ] ] in the second sum in eq . ( 7.1 ) we must put , since the term with would correspond to a force .we have normalized such that at the function has the same radial component as .the solution is of type , the solution is of type .the corresponding first order pressure is with component pressure disturbance the multipole coefficients and in eq .( 7.1 ) can be expressed as the corresponding displacement is .\end{aligned}\ ] ] in the calculation of the mean swimming velocity , as given by eq .( 2.12 ) , we use the identities the first one is equivalent to eq .it follows that the mean swimming velocity is again given by a sum of products of adjacent multipole coefficients , \nonumber\\ & + & \frac{(l+1)(l+2)(2l-1)}{2l+3}\big[\kappa_{lc}\mu_{l+1,s}-\kappa_{ls}\mu_{l+1,c}\big]\nonumber\\ & + & \frac{(l+1)(l+2)(2l-1)}{2l+3}\big[\mu_{lc}\kappa_{l+1,s}-\mu_{ls}\kappa_{l+1,c}\big]\nonumber\\ & + & ( l+1)(l+2)\frac{(2l-3)(2l-1)}{(2l+1)(2l+3)}\big[\kappa_{lc}\kappa_{l+1,s}-\kappa_{ls}\kappa_{l+1,c}\big]\bigg].\end{aligned}\ ] ] we define the complex multipole moment vector as the one - dimensional array then can be expressed as with a dimensionless pure imaginary and antisymmetric matrix .the upper left - hand corner of the matrix , truncated at , reads we can impose the constraint by dropping the first element of and erasing the first row and column of the matrix .we denote the corresponding modified vector as and the modified matrix as .the rate of dissipation is expressed as a surface integral where is the first order stress tensor , given by in the calculation of the rate of dissipation we use the identities the first one is equivalent to eq . ( 3.16 ) . the time - averaged rate of dissipation is given by .\end{aligned}\ ] ] this can be expressed as with a dimensionless real and symmetric matrix .we denote the modified matrix obtained by dropping the first row and column by .the upper left - hand corner of the matrix , truncated at , reads if the elements corresponding to the multipole moments are omitted , then these results reduce to those obtained earlier for irrotational flows .we impose the constraint that the force exerted on the fluid vanishes at any time .this requires . with this constraintthe mean swimming velocity and the mean rate of dissipation can be expressed as optimization of the mean swimming velocity for given mean rate of dissipation leads to the eigenvalue problem the matrix is pure imaginary and antisymmetric and the matrix is real and symmetric . as in the case of potential flowswe truncate at maximum -value . the truncated matrices and are -dimensional .the structure of the eigenvalue equations is such that they can be satisfied for real eigenvalues by eigenvectors with components which are real for odd and pure imaginary for even .the complex conjugate of an eigenvector corresponds to the eigenvalue for the opposite sign .hence it suffices to consider the positive eigenvalues . 
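the optimization step described above reduces , after truncation , to a generalized eigenvalue problem with a hermitian velocity matrix ( pure imaginary and antisymmetric implies hermitian ) and a real symmetric power matrix . the python / scipy sketch below ( our illustration ) shows that step in isolation ; the placeholder matrices stand in for the article's displayed matrices , which are not reproduced in this copy , and the power matrix is assumed positive definite since it represents the positive rate of dissipation .

```python
import numpy as np
from scipy.linalg import eigh

def optimal_stroke(B, A):
    """Maximize the speed form (xi^H B xi) at fixed dissipation form (xi^H A xi)
    by solving the generalized eigenvalue problem  B xi = lambda A xi.
    B must be Hermitian and A Hermitian positive definite."""
    evals, evecs = eigh(B, A)            # symmetric-definite generalized problem
    return evals[-1], evecs[:, -1]       # largest eigenvalue and optimal multipole vector

# usage sketch with stand-in matrices (the true entries are given by the
# article's displayed formulas, which did not survive extraction here):
rng = np.random.default_rng(0)
n = 8
C = rng.standard_normal((n, n))
B = 1j * (C - C.T)                                     # pure imaginary, antisymmetric -> Hermitian
A = np.eye(n) + 0.1 * np.diag(np.arange(1.0, n + 1))   # symmetric positive definite stand-in
lam_max, xi_opt = optimal_stroke(B, A)
print("largest eigenvalue:", lam_max)
```

in the actual problem the first row and column are removed before the solve , implementing the force - free constraint mentioned above .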
in our plotswe have chosen the phase of the eigenvectors such that the first potential multipole moment is real and positive .with truncation at the eigenvalue problem is equivalent to that for two coupled linear harmonic chains with masses corresponding to the diagonalized form of the matrix .however , it is not necessary to perform this diagonalization explicitly , and it suffices to discuss eq .( 8.2 ) directly .it is of interest to consider the matrix along the diagonal direction of the matrix for large .diagonalization of this matrix shows that one of its eigenvalues is of order unity , whereas the second one grows as as increases . for the eigenvector corresponding to the first eigenvaluethe second component is nearly the opposite of the first , and for the second eigenvalue the two components are nearly equal .this suggests that the eigenvector with largest eigenvalue for the problem eq .( 8.2 ) for large is a mixture of flows of potential and viscous type with nearly equal and opposite amplitudes .this is confirmed by numerical solution of the eigenvalue problem for a large value of , say .if the optimal eigenvector is decomposed into potential and viscous components , corresponding to - and -moments respectively , then the norm of the viscous part is nearly equal to the norm of the potential part .it turns out that the inclusion of the viscous part has a dramatic effect on the maximum eigenvalue . in fig .8 we show the maximum eigenvalue as a function of , in analogy with fig .this shows that tends to a constant larger than 2 for large .we prove that the constant equals .the inclusion of viscous flows has led to a qualitative change .it is no longer sufficient to consider the asymptotically uniform linear chain as in sec .the asymptotic variation of couplings and masses along two coupled linear harmonic chains must be taken into account . with modified moments as in eq .( 4.8 ) the matrices along the diagonal of the corresponding matrices and linking the multipoles of order and in the limit of large take the form in comparison the large behavior of the matrices along the diagonal of the matrices and of sec. 4 is given by the eigenvalue problem has eigenvalues , , and the eigenvalue problem has the same eigenvalues , each twofold degenerate .however , the result is unstable under small perturbations , and the higher order terms of the matrix elements to order must be considered to obtain the correct result corresponding to the coupled linear chains . thus instead of eq .( 8.5 ) we consider the asymptotic behavior from eqs . ( 7.8 ) and ( 7.15 ) one finds that the matrices and are given by the matrices and are given by from the eigenvalue equation one finds that in the limit the eigenvalues tend to , each twofold degenerate . 
from the eigenvalue equation one finds that in the limit the eigenvalues tend to , each twofold degenerate .the largest eigenvalue is a factor larger than given below eq .hence for the complete problem with matrices and the maximum eigenvalue in the limit is a factor larger than obtained from the linear chain problem for potential flows of sec .the maximum eigenvalue for the present problem therefore tends to in the limit , as suggested by fig .thus with the inclusion of modes the efficiency of translational swimming defined in eq .( 5.1 ) takes the maximum value as in the case of potential swimming the optimum value is reached for a set of multipoles decaying in absolute magnitude as at large .this suggests that the maximization of leads to an optimum stroke which is not of physical relevance .we denote the eigenvector with maximum eigenvalue corresponding to the truncated matrices and as with normalization . as in sec . v we look for a different selection criterion for optimization of the stroke . for the more general axisymmetric flow patterns we find again that for the eigenvector the displacement at describes an ellipse in the plane given by eq .( 5.6 ) , but now with modified expressions for the coefficients and .more generally we consider arbitrary values of .we then find that in general the vector describes an ellipse in the plane which is tilted with respect to the axis .the shape and tilt of the ellipse are described conveniently by stokes parameters .the components and can be expressed as with complex amplitudes and given by where the stokes parameters of the ellipse at polar angle are defined by where for brevity we have omitted the subscript and the variable .the tilt angle of the ellipse is given by and the ellipticity follows from the long and short semi - axis of the ellipse are we find for each that the ellipse described by at for the stroke with maximum efficiency has its long axis parallel to the axis . thus if we represent the ellipse again by eq .( 5.6 ) then for multipoles given by the ellipse described by will have horizontal semi - axis and vertical semi - axis .we therefore consider the reduced speed and power at fixed horizontal amplitude of stroke , with in fig .9 we show the plot of for the optimal eigenvector as a function of , and in fig .10 we show the corresponding plot for the reduced power .the reduced power shows again a minimum , this time at , given by . at reduced amplitude is , and the reduced speed is . in fig .11 we plot the absolute values of the set of multipole moments for the optimal eigenvector with . for the set of complex multipoles the mean speed and rate of dissipationare performing the same estimate as at the end of sec .v for the more general class of flows with the optimum stroke for we find power watt and speed m / sec .the efficiency is , compared with the maximum possible for general flow . for wattwe find relative amplitude and speed m / sec .the nature of the optimum stroke for is shown in fig .12 , in analogy to fig .5 . the time - dependent swimming velocity and rate of dissipation can be evaluated in analogy to eqs .( 6.1 ) and ( 6.2 ) .the dimensionless ratios and for the optimal stroke with vary in time quite similarly to the behavior shown in fig . 7 . 
againthe ratio equals unity within numerical accuracy .basing ourselves on the stokes equations , rather than the linearized navier - stokes equations , we have developed a simpler discussion of the swimming of a sphere at low reynolds number with the restriction to potential flow solutions than was presented before .the identities eqs .( 3.11 ) and ( 3.16 ) play a crucial role .they imply that the representation of the flow in terms of electrostatic multipole potentials is particularly simple . in this representationthe matrix , from which the rate of dissipation is calculated , is diagonal , and the matrix , from which the swimming velocity is calculated , is tri - diagonal .correspondingly , the eigenvalue problem which yields the swimming stroke of maximum efficiency , is relatively simple .subsequently we have extended the derivation to the complete set of axisymmetric polar solutions of the stokes equations .an additional set of multipole moments corresponding to flows with vorticity needs to be introduced .although this leads to a doubling of dimensionality , the structure of the eigenvalue problem in the chosen representation remains fairly simple .the additional flow solutions allow a considerable enhancement of efficiency , defined as the dimensionless ratio of speed and power . as in the case of irrotational flow ,the maximum efficiency is attained for a stroke characterized by multipoles with a significant weight at high order .this indicates that the efficiency is not the most suitable measure of swimming performance .therefore we have considered a measure of performance based on a comparison of energy consumption for strokes with the same amplitude .the measure allows selection of a stroke with minimum energy consumption in a class of possible strokes .the optimal stroke selected in this manner involves multipoles of relatively low order and is expected to be of physical interest .although the spherical geometry provides only a crude approximation to the shape of most microorganisms , it has the advantage that the mechanism of swimming can be analyzed in great detail .the analysis shows that it is worthwhile to consider various measures of swimming performance .the mathematical formalism may serve as a guide in the study of more complicated geometry , such as a spheroid or an ellipsoid .99 g. i. taylor , proc .a * 209 * , 447 ( 1951 ) .m. j. lighthill , commun .pure appl . maths . * 5 * , 109 ( 1951 ) .j. r. blake , j. fluid mech . * 46 * , 199 ( 1971 ) .a. shapere and f. wilczek , j. fluid mech .* 198 * , 557 ( 1989 ) .a. shapere and f. wilczek , j. fluid mech .* 198 * , 587 ( 1989 ) .j. happel and h. brenner , _ low reynolds number hydrodynamics _( noordhoff , leyden , 1973 ) . b. u. felderhof and r. b. jones , physica a * 202 * , 94 ( 1994 ) . b. u. felderhof and r. b. jones , physica a * 202 * , 119 ( 1994 ) .d. b. dusenbery , _ living at micro scale _ ( harvard university press , cambridge ( mass . ) , 2009 ) .j. d. jackson , _ classical electrodynamics _ ( wiley , new york , 1989 ). j. a. sparenberg , j. eng . math . * 44 * , 395 ( 2002 ) .a. r. edmonds , _ angular momentum in quantum mechanics _ ( princeton university press , princeton ( n.j . ) , 1974 ) .m. abramowitz and i. a. stegun , _ handbook of mathematical functions _ ( dover , new york , 1965 ) .s. childress , _ mechanics of swimming and flying _( cambridge university press , cambridge , 1981 ) .h. tennekes , _ the simple science of flight _ ( mit press , cambridge ( mass . ) , 2009 ) .b. cichocki , b. u. 
felderhof , and r. schmitz , physicochem . hyd . * 11 * , 507 ( 1989 ) . c. f. bohren and d. r. huffman , _ absorption and scattering of light by small particles _( wiley , new york , 1983 ) .plot of the components of the eigenvector with largest eigenvalue , normalized to unity , for . the corresponding multipoles with follow from eq . ( 4.2 ) .the values for even and the values for odd vanish .plot of the reduced speed for fixed maximum amplitude of the displacement at as a function of . at each value of most efficient set of multipoles for swimming via irrotational flow is considered .plot of the reduced power for fixed maximum amplitude of the displacement at as a function of . at each value of most efficient set of multipoles for swimming via irrotational flow is considered .plot of the end of the displacement vector at and for maximum amplitude of the displacement at equal to for the optimum eigenvector for with complex multipoles .the motion is depicted with start at and finish at , where .the endpoint is marked by a small circle .plot of the radial displacement for maximum amplitude as a function of polar angle for ( solid curve ) , ( long dashes ) , and ( short dashes ) . a running wave can be discerned .plot of the ratio as a function of time for swimming motion corresponding to the optimal set of multipoles with ( solid curve ) .we also plot the ratio for the same swimming motion .this equals unity within numerical accuracy .plot of one - half the maximum eigenvalue for sets of complex multipoles with and as a function of for .the values tend to as .plot of the reduced speed for fixed maximum amplitude of the displacement at as a function of . at each value of most efficient set of multipoles is considered .plot of the reduced power for fixed maximum amplitude of the displacement at as a function of . at each value of most efficient set of multipoles is considered .plot of the non - vanishing components of the eigenvector with largest eigenvalue , normalized to unity , for a set of complex multipoles with and .the absolute values of the are indicated by squares and those of the are indicated by dots .plot of the end of the displacement vector at and for maximum amplitude of the displacement at equal to for the optimum eigenvector for with complex multipoles .the motion is depicted with start at and finish at , where .the endpoint is marked by a small circle .
swimming velocity and rate of dissipation of a sphere with surface distortions are discussed on the basis of the stokes equations of low reynolds number hydrodynamics . at first the surface distortions are assumed to cause an irrotational axisymmetric flow pattern . the efficiency of swimming is optimized within this class of flows . subsequently more general axisymmetric polar flows with vorticity are considered . this leads to a considerably higher maximum efficiency . an additional measure of swimming performance is proposed based on the energy consumption for given amplitude of stroke .
diffusion is one of the most studied and widespread processes in science. einstein's description of the erratic motion of small particles on fluid surfaces, the historical brownian motion ( bm ), led to a proof that matter is constituted by atoms and molecules in constant motion, in accordance with the kinetic theory. numerous other physical systems exhibit some diffusion process, such as the spreading of dengue by the migration of infected individuals or mosquitoes, diffusion - limited aggregation applied to growth phenomena, the analysis of financial data on the stock market, models of protein folding, fixational eye movements, and many more. many of these stochastic processes can be studied using the classical random walk ( rw ). it is not easy to obtain an intuitive comprehension of stochastic phenomena, because this usually requires advanced mathematical tools. indeed, several teachers have presented good pedagogical approaches using numerical experiments of low computational cost that are accessible to any student with a computer. also, many other numerical experiments can be used to complement real experiments. the use of computational technologies is explored by many educators who propose strategies for elaborating good tools to support the cognitive development of students. such technologies have been widely used in physics teaching. to minimize the difficulties of teaching and learning physical phenomena, some teachers have added to their classes software able to follow the evolution of the governing equations, in order to create simulations and animations of the phenomena of interest, or even to automate experimental data acquisition and to model and interact with virtual physical environments. therefore, in this paper, the two - dimensional random walk of particles on a fluid surface was simulated with computational animations in order to study brownian motion. to accomplish this task, we use the free 2d simulation software known as algodoo, by algoryx simulation ab. this software has an interactive environment that allows the creation of experimental scenarios, like animated movies, but with the equations and physical properties imposed on the simulations available as feedback. algodoo is an easily manipulated tool and does not require specific knowledge of computer programming, or training, to perform tasks with the software, which allows easy learning by students. in a recent work of our research group we presented the potential of this software as a tool for teaching and learning physics by considering animations of projectile launching. we believe that complementing the mathematical description of brownian motion with the algodoo animations will provide students a better understanding of all the techniques involved in this stochastic model. with this project we also create the basis for the theoretical instrumentation needed to study other diffusive processes using the algodoo animations of brownian motion. this paper is organized as follows: first, we introduce some basic properties of two - dimensional random walks for an active particle with randomly directed velocity in a two - dimensional homogeneous environment.
for convenience, we will use the familiar picture of one diffusing particle .then the animations were built by using the algodoo software .next , from the animations , we will show how to calculate the diffusion coefficient by using two methods : the mean - square - displacement and the displacement histogram of the brownian particle . andfinally , we show that the random walk animations provide a clear understanding of transition between the ballistic behavior and the diffusive behavior of brownian motion .the brownian system is constituted by a suspended particle in a fluid with random motion resulting for their collision with the atoms and molecules of the fluid .since , is not an easily task to simulate a fluid , because this simulation involve to determine the solutions of the euler and navier - stokes equations .so , in our animations the fluid is formed by small blue disks moving randomly . the manufacture of this fluid consist in to create a little set of identical disks and to attribute to them velocity with different magnitude and direction randomly .to increase the number of disks of the fluid the process are repeated several times until we get a desired concentration .the null friction and maximum elasticity in the collision between the disks is considered , to maintain constant the mean kinetic energy of system .these characteristics can be established selecting all disks and to edit in the item _ material _ " of the algodoo software .a red disk can be introduced in any region of the system , and represents our `` brownian '' particle .the particles of the fluid ( red disks ) has a diameter defined much bigger than the particles in suspension ( blue disks ) in order to enhance the contrast . for the present animationwas estabilished , by convinience , a flat rectangular region of , where were uniformly distributed 7.2 blue disks forming the system fluid .each blue disk has a mass of g and occupies an area of 12.0 .intensities of velocities were distributed between 0.1 m / s e 5.5 m / s with random directions . the brownian particles , i.e. , the red disk , has a mass of 70 g and area of 3.0 .these specification were choose arbitrary and do not represent any material or specific system .this software has a limitation to assign small dimensions and mass in the drawn objects .our aim is to create fictitious animations of the random walk movement to study the brownian motion properties . in the fig .1 ( a ) we present the illustration of the environment built to observe the brownian motion . when we start the animations in the algodoo environment , the simulate dynamics of the blue disks a random motion of the molecules in a fluid ( fig . 1 ( b ) ) . to see trajectories of the red disk we use the tool _ tracer _ " . this tool draws a line by all the trajectory of the selected object .the trajectory of the motion can be observed in the fig . 
1 ( b ) by a path illustrated in a red line .theoretically , the motion in the _ x _ and _ y _ axes obeys the classical random walk with the two - dimensional mean displacement null and the mean square displacement ( msd ) is given by : where and are the one - dimensional msd , _ d _ is the diffusion coefficient and _ t _ the time between the points of the data .thus , the msd depends of the diffusion coefficient and time .the root mean square ( rms ) displacement = is proportional to , allowing that the red disk visits quickly _ ( t ) _ the surrounding area of the starting point and that requires a long times _ ( t ) _ to get long distances .the bm presents a transition between a smooth ballistic behavior to the diffusive behavior .the scaling - time of the transition is given by the relaxation time . in real physical systems , the relaxation time is ordinarily very short , typically in the order of some microseconds or nanoseconds . for a short interval of time ) , has a quadratic time dependence , and for long times ( ) this dependence becomes linear . even the theory was proposed in 1905 , the transition between ballistic and diffusive behavior was experimentally proofed only in 2010 , when was possible to measure the position of the particle to study the instantaneous velocity and the transition of the ballistic to the diffusive behavior . to study the bm by animation with algodoo we use the graphical tool _ show plot _ " . with this toolwe can chose one of the variables used in the animation and present them in a graphic , as illustrated in the fig .2 . in the case of bm , the time evolution of the horizontal _x(t ) _ and the vertical _ y(t ) _ positions of the brownian disk , was treated as time series . the option _ show plot _" exhibits the _ x _ and _ y _ positions of the disk as a function of time , that can be observed in the fig .2 items ( _ a _ ) and ( _ b _ ) respectively . with this toolis possible to save a spreadsheet in an extension .csv ( item ( _ c _ ) of the fig .2 ) with a frequency of 60 points per second . was used one hundred brownian disks and monitored the time evoluting of each disk during ten seconds . since any disk presents a different initial condition in the one hundred csv fileswere produced containing the time series of the brownian disks , and the time series were used to analyze the bm . in our animations , the diffusion coefficient _ d _ obtained from two methods .the first method is the evaluation of the graphic of the probability - distribution of the particle position .using this graphic we can fit the gaussian curve by using : the variance in the brownian motion is equal to the msd , since the mean displacement is null in this case . we can also use the variance from the displacements distribution to calculate _d _ by equation of one - dimensional random walk : using any software that analyze mathematically the spreadsheet , as the microsoft excell , open office or origin , we can do the mathematical treatment of the time series , as desired . 
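as an alternative to a spreadsheet program , the exported time series can be processed directly in python ; the sketch below is our own illustration and assumes one .csv file per brownian disk with columns named time , x and y — the file names and column labels are hypothetical and must be adapted to what algodoo actually writes .

```python
import glob
import numpy as np
import pandas as pd

T_SKIP = 1.5   # seconds discarded at the start of each run (transient before the walk sets in)

def load_runs(pattern="brownian_disk_*.csv"):
    """Read the exported time series (assumed columns: 'time', 'x', 'y')
    and return one (t, x, y) tuple of arrays per Brownian disk."""
    runs = []
    for name in sorted(glob.glob(pattern)):
        df = pd.read_csv(name)
        df = df[df["time"] >= T_SKIP]
        runs.append((df["time"].to_numpy(), df["x"].to_numpy(), df["y"].to_numpy()))
    return runs

runs = load_runs()
print(f"loaded {len(runs)} runs with {len(runs[0][0])} samples each (60 samples per second)")
```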
by calculating the horizontals and verticals displacements and it is possible to create a list and to produce the graphic of the probability distribution of the particle displacements , which will be fitted using equation 3 .the second method consists in calculate the two - dimensional msd for each instant of time t by : .\label{eq06}\ ] ] where the cartesian pair represents the initial position of the particle in each sample and _ n _ is the number of samples .the diffusion coefficient _ d _ is the slope of the linear fit of the curve versus _t _ , according to equation 1 . to demonstrate the transition between the ballistic and diffusive behavior we use the msd curve for a short interval of time , generated by the second method . from the graphic of versus _t _ we can demonstrate the quadratic and linear temporal behavior of and from this graphic the relaxation time is obtained . in laboratories classes for undergraduate students , an experiment about a typical bmrequires that the student register the motion of at least ten particles for measure the diffusion coefficient and/or the relaxation time .experiments of bm in advanced stages became significant , but the cost of preparation and the quantification of these experiments are difficult , usually producing deviations about 10 to 15 or even more . in practical terms , the study of the brownian motion by computational experiments minimize largely the experimental problems .in this scenario , the algodoo software is a good option and it stand out by easier manipulation , and does not requiring a specific programming knowledge .we do not propose here the elimination of bm experiments , but we present a new tool to support the teaching / learning of physics by animations built with algodoo ./s . ] with the animations done , teachers can establish with their students a great opportunity for discussions about the excitement of the small disks associating qualitatively to thermal excitement of the hypothetical fluid .such thermal excitement produces continuous collisions with the red disk located in that region , causing an erratic motion , characterizing the brownian motion ( fig .1 ( a ) e ( b ) ) . due to the fact that this software can be easy manipulate , teachers can pause , increase or decrease the speed of the animations , or also modify the environment characteristic to enrich the qualitative presentation of the motion .as said in previous section , the time series of the brownian disk positions is saved , by _ show plot _ " tool , in an extension .csv where data are analyzed by worksheet software .we have executed one hundred independently brownian motions for 10 s. during this interval of time we have exclude all initial points till 1.5 s. until approximately 0.8 s we observe that the animations do not get the random walk state and consequently it can disturb the analysis of the motion .these observations are done for the animation with the initial conditions described in the previous section .if those conditions change , can one realize new analysis of the initial time interval .therefore , we have established a 1.5 s , for convenience . 
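the two estimates of the diffusion coefficient described above can be reproduced with the sketch below ( python / numpy / scipy , our own illustration rather than the authors' script ) . it assumes the runs list produced by the loader sketched earlier and a sampling interval of 1/60 s , and it implements method i ( gaussian fit of the pooled one - dimensional displacement histogram , with d = variance / ( 2 t ) ) and method ii ( linear fit of the two - dimensional msd , whose slope is 4 d ) .

```python
import numpy as np
from scipy.optimize import curve_fit

DT = 1.0 / 60.0          # sampling interval of the exported time series, in seconds

def diffusion_from_histogram(runs, lag=1):
    """Method I: fit a Gaussian to the pooled 1D displacements over `lag` samples
    and convert the fitted variance to D via  <dx^2> = 2 D t  (1D random walk)."""
    disp = []
    for _, x, y in runs:
        disp.append(x[lag:] - x[:-lag])
        disp.append(y[lag:] - y[:-lag])
    disp = np.concatenate(disp)
    counts, edges = np.histogram(disp, bins=60, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    gauss = lambda s, sigma: np.exp(-s**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    (sigma,), _ = curve_fit(gauss, centers, counts, p0=[disp.std()])
    return sigma**2 / (2 * lag * DT)

def diffusion_from_msd(runs, t_min=0.5, t_max=5.0):
    """Method II: average the squared displacement from the starting point over all
    runs and fit  MSD(t) = 4 D t  on the diffusive (long-time) part of the curve."""
    n = min(len(x) for _, x, _ in runs)
    msd = np.zeros(n)
    for _, x, y in runs:
        msd += (x[:n] - x[0])**2 + (y[:n] - y[0])**2
    msd /= len(runs)
    t = np.arange(n) * DT
    sel = (t > t_min) & (t < t_max)          # skip the short-time ballistic regime
    slope = np.polyfit(t[sel], msd[sel], 1)[0]
    return slope / 4.0

print("D from displacement histogram:", diffusion_from_histogram(runs))
print("D from MSD linear fit        :", diffusion_from_msd(runs))
# A quadratic fit on the short-time part, np.polyfit(t[t < 0.3], msd[t < 0.3], 2),
# exposes the ballistic regime and yields an estimate of the relaxation time.
```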
by using method ,we have fitted a gaussian curve ( equation 3 ) on the displacement distribution graphics in the fig .the histogram is built from the independently one - dimensional displacements and together .the variance of the adjusted gaussian curve is equal to the mean - square of one - dimensional displacement , and we can use this to calculate the diffusion coefficient by using equation 5 .the obtained value from the displacement histogram is _ d _= 0.221 m/s . /s . ] after the square position displacements were calculated for all brownian disks in the animation ( fig . 4 grey curves ) , we calculate the average over all samples , i.e. , msd ( fig .4 black curve ) , by equation 6 .the diffusion coefficient is determined by the slope of msd as function of time by using equation 1 , where we obtain _ d _ = 0.205 m/s ( using method ) .we found a difference of about 8 between the methods . for a short time interval ( ) the dynamics of the brownian diskis governed by the translacional inertia and the motion is in ballistic regime .is demonstrated in figure 5 that msd has a parabolic time dependence ( curve with red points ) differing from the typical diffusive motion ( black points ) , where msd has a linear dependence .the relaxation time was availed at = 0.15 s. we have adjusted a curve with the form (t ) = c+bt+a on the parabolic region and we obtain c = 1.9 / , b = 3.5 /s and a = -1.9 .changing the animation of the bm it s possible to generate others diffusion regimes , that can be used to teach other diffusive phenomena , as electrical diffusive motion on semiconductor materials . ) with - adjust ( red curve ) with .( black ) for long times ( ) becomes linear .the relaxation time was estimated at = 0.15 s. ] these animations can be used in several levels in the school . for the high school level , the animations of the bm can be used to illustrate the random motion of a particle suspended on a fluid . for undergraduate levels , the animations can be also a tool to facilitate comprehension of mathematics procedures involved in the bm studying .from the data collected by algodoo , the teacher can elaborate scripts to teach the students how to obtain physical quantities from the time series , the same way as is done in laboratory experiments .also , it would be an interesting introduction to the concepts of statistical physics , normal distribution , gaussian curve , and so on which is very important in many areas of physics , from thermodynamics to quantum mechanics . for higher levels , students can learn scientific procedures for formulation of models , collecting and data treatment .this basic knowledge supports students , beginning in the science world , by development of research projects .in this paper , we showed a computational tool of great potential for teaching and learning physics , using the freeware algodoo . 
using an animation based on the two - dimensional random walk of particles on a fluid surface, we can study the basic concepts of brownian motion, making it a great support tool for the teaching and learning of statistical physics. the animations presented in this paper provide teachers and students with a simple tool for qualitative and quantitative analyses of the bm. it is possible to observe and discuss the random walk of the small disks, which results in an erratic motion, and to associate it qualitatively with the thermal agitation of a hypothetical fluid, which characterizes the bm. the diffusion coefficient was calculated from the animations by two methods: method i, fitting a gaussian curve to the distribution of the pooled independent one - dimensional displacements; and method ii, fitting the msd as a function of time, using equation 1. a difference of approximately 8% was found between the two methods. we have demonstrated that for a short interval of time ( t much smaller than the relaxation time ) the msd is proportional to t^2 and the motion is ballistic, the particle moving without suffering collisions along its path. for long times the msd is proportional to t and the motion is diffusive, the particle suffering collisions along its path. the relaxation time was estimated as 0.15 s. algodoo is easy to manipulate and does not require any specific knowledge of programming, or training, to perform the tasks in the software. through this environment, educators and students can explore all the potentialities of the studied theme, proposing modifications to the system ( the size of the particles, the intensity of the velocities, the boundary conditions ) and discussing their consequences ( an increase of the relaxation time, a reduction of the diffusion coefficient, and so on ). many diffusive systems can be explored starting from this initial proposal of an animation of brownian motion. the didactic strategy of combining analytical approaches and animations supports the teaching and learning processes. in this way algodoo also shows itself as a support tool that makes the teaching and learning of physics simpler and more fruitful when compared to other animation and learning software. the authors thank faperj for the financial support of this work. zachariadou, k., yiasemides, k. & trougkakos, n. [ 2012 ] `` a low - cost computer - controlled arduino - based educational laboratory system for teaching the fundamentals of photovoltaic cells, '' _ eur. j. phys. _ * 33 *, 1599 - 1610. de souza, a. r., paixo, d. d., uzda, a. c., dias, m. a., duarte, s. & amorim, h. s. [ 2011 ] `` the arduino board: a low cost option for physics experiments assisted by pc, '' _ rev. fis. _ * 33 *, 1702. da silva, s. l., da silva, r. l., guaitolin junior, j. t., gonalves, e., viana, e. r. and wyatt, j. b. l. [ 2014 ] `` animation with algodoo: a simple tool for teaching and learning physics, '' _ exatas online _ * 5 *, 28 - 39.
in this work, animations of random walk motion built with the freeware algodoo were used to support the teaching of the concepts of brownian motion. the random walk was simulated by considering elastic collisions between the particle in suspension and the particles that constitute the fluid. the velocity magnitudes were defined within an arbitrary range, with a random distribution of the velocity directions. using two methods, the distribution histogram of displacements ( dhd ) and the mean - square displacement ( msd ), it was possible to measure the diffusion coefficient of the system and to determine the regions where the system presents a ballistic or a diffusive transport regime. the ballistic regime is observed graphically when the msd has a parabolic dependence on time, differing from the typical diffusive regime, where the msd has a linear dependence. the didactic strategy of combining analytical approaches, such as graphical analysis, with animations in easily implemented software supports the teaching and learning processes, especially in physics, where we want to explain experimental results within theoretical models. brownian motion ; random walk ; diffusion coefficient ; animation ; algodoo ; teaching of physics.
randomized algorithms have established themselves as some of the most competitive methods for rapid low - rank matrix approximation , which is vital in many areas of scientific computing , including principal component analysis and face recognition , large scale data compression and fast approximate algorithms for pdes and integral equations . in this paper, we consider randomized algorithms for low - rank approximations and singular value approximations within the subspace iteration framework , leading to results that simultaneously retain the reliability of randomized algorithms and the typical faster convergence of subspace iteration methods . given any matrix with , its singular value decomposition ( svd ) is described by the equation where is an column orthogonal matrix ; is an orthogonal matrix ; and with .writing and in terms of their columns , then and are the left and right singular vectors corresponding to , the -th largest singular value of . for any ,we let be the ( rank- ) truncated svd of .the matrix is unique only if .the assumption that will be maintained throughout this paper for ease of exposition .our results still hold for by applying all the algorithms on .similarly , all our main results are derived under the assumption that .but they remain unchanged even if , and hence remain valid by a continuity argument .all our analysis is done without consideration of round - off errors , and thus need not hold exactly true in finite precision , especially when the user tolerances for the low - rank approximation are close to machine precision levels .additionally , we assume throughout this paper that all matrices are real . in general, is an ideal rank- approximation to , due to the following celebrated property of the svd : [ thm : truncsvd ] ( eckart and young , golub and van loan ) while there are results similar to theorem [ thm : truncsvd ] for all unitarily invariant matrix norms , our work on low - rank matrix approximation bounds will only focus on the two most popular of such norms : the 2-norm and the frobenius norm .theorem [ thm : truncsvd ] states that the truncated svd provides a rank- approximation to with the smallest possible 2-norm error and frobenius - norm error . in the 2-norm, any rank- approximation will result in an error no less than , and in the frobenius - norm , any rank- approximation will result in an error no less than . additionally , the singular values of are exactly the first singular values of , and the singular vectors of are the corresponding singular vectors of .note , however , that while the solution to problem ( [ eqn : truncsvdf ] ) must be , solutions to problem ( [ eqn : truncsvd2 ] ) are not unique and include , for example , the rank- matrix defined below for any : this subtle distinction between the 2-norm and frobenius norm will later on become very important in our analysis of randomized algorithms ( see remark [ rem : fvs2 ] . ) in theorem [ thm : fto2 ] we prove an interesting result related to theorem [ thm : truncsvd ] for rank- approximations that only solve problems ( [ eqn : truncsvd2 ] ) and ( [ eqn : truncsvdf ] ) approximately . to compute a truncated svd of a general matrix ,one of the most straightforward techniques is to compute the full svd and truncate it , with a standard linear algebra software package like the lapack .this procedure is stable and accurate , but it requires floating point operations , or _flops_. 
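since parts of the displayed error formulas in theorem [ thm : truncsvd ] were lost above , the short numpy check below ( our illustration , not part of the original text ) computes the truncated svd in the straightforward way just described and verifies the eckart - young identities , namely that the 2 - norm error equals sigma_{k+1} and the frobenius - norm error equals ( sigma_{k+1}^2 + ... + sigma_n^2 )^{1/2} .

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 120))
k = 15

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)     # full SVD, then truncate
A_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]           # rank-k truncated SVD

print(np.linalg.norm(A - A_k, 2), sigma[k])                           # 2-norm error = sigma_{k+1}
print(np.linalg.norm(A - A_k, "fro"), np.sqrt(np.sum(sigma[k:]**2)))  # Frobenius error formula
```

for this small example the full svd is cheap , but its cost grows quickly with the matrix dimensions .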
this is prohibitively expensive for applications such as data mining , where the matrices involved are typically sparse with huge dimensions . in other practical applications involving the truncated svd , oftenthe very objective of computing a rank- approximation is to avoid excessive computation on .hence it is desirable to have schemes that can compute a rank- approximation more efficiently . depending on the reliability requirements ,a good rank- approximation can be a matrix that is accurate to within a constant factor from the optimal , such as a rank - revealing factorization ( more below ) , or it can be a matrix that closely approximates the truncated svd itself .many approaches have been taken in the literature for computing low - rank approximations , including rank - revealing decompositions based on the qr , lu , or two - sided orthogonal ( aka utv ) factorizations .recently , there has been an explosion of randomized algorithms for computing low - rank approximations .there is also software package available for computing interpolative decompositions , a form of low - rank approximation , and for computing the pca , with randomized sampling .these algorithms are attractive for two main reasons : they have been shown to be surprisingly efficient computationally ; and like subspace methods , the main operations involved in many randomized algorithms can be optimized for peak machine performance on modern architectures . for a detailed analysis of randomized algorithms and an extended reference list ,see ; for a survey of randomized algorithms in data analysis , see .the subspace iteration is a classical approach for computing singular values .there is extensive convergence analysis on subspace iteration methods and a large literature on accelerated subspace iteration methods . in general , it is well - suited for fast computations on modern computers because its main computations are in terms of matrix - matrix products and qr factorizations that have been highly optimized for maximum efficiency on modern serial and parallel architectures .there are two well - known weaknesses of subspace iteration , however , that limit its practical use . on one hand, subspace iteration typically requires very good separation between the wanted and unwanted singular values for good convergence . on the other hand, good convergence also often critically depends on the choice of a good start matrix .another classical class of approximation methods for computing an approximate svd are the krylov subspace methods , such as the lanczos algorithm ( see , for example . )the computational cost of these methods depends heavily on several factors , including the start vector , properties of the input matrix and the need to stabilize the algorithm .one of the most important part of the krylov subspace methods , however , is the need to do a matrix - vector product at each iteration .in contrast to matrix - matrix products , matrix - vector products perform very poorly on modern architectures due to the limited data reuse involved in such operations , in fact , one focus of krylov subspace research is on effective avoidance of matrix - vector operations in krylov subspace methods ( see , for example . ) this work focuses on the surprisingly strong performance of randomized algorithms in delivering highly accurate low - rank approximations and singular values .to illustrate , we introduce algorithm [ alg : randsami ] , one of the basic randomized algorithms ( see . 
)[ alg : randsami]*basic randomized algorithm * + [ cols= " < , < " , ] given a set of terms ( a query ) , lsi attempts to find the document that best matches it in some semantical sense . to do so, lsi computes a rank- truncated svd of the term - document matrix so that . for any query vector ,compute the feature vector .the document that most matches is the row of that is the most parallel to .we use the tdt2 text data .the tdt2 corpus consists of data collected during the first half of 1998 and taken from 6 sources , including 2 newswires ( apw , nyt ) , 2 radio programs ( voa , pri ) and 2 television programs ( cnn , abc ) .it consists of 11201 on - topic documents which are classified into 96 semantic categories .what is available at is a subset of this corpus , with a total of 9,394 documents and over terms .[ fig : lsi ] we performed random queries with the truncated svd for different values of .then we repeat the same queries with the low - rank approximation computed by algorithm [ alg : randsiad ] for and a decreasing set of values . for each and , algorithm [ alg : randsiad ]automatically stops once column samples have been reached in computing the low - rank approximation .table [ tab : lsi ] clearly indicates that better accuracy in randomized algorithms leads to more agreement with the truncated svd in terms of query matches .due to the nature of this experiment , an agreement does not always mean a better match .however , table [ tab : lsi ] does give some indication that better accuracy in the low - rank approximation is probably better for lsi .since looks significantly better than , this example indicates that for lsi , it may be necessary to use algorithm [ alg : randsiad ] with a small but positive value for best performance .we begin with the following probability tool .( chen and dongarra ) [ lem : chendon ] let be an standard gaussian random matrix with , and let denote the probability density function of , then satisfies : the following classical result , the _ law of the unconscious statistician _, will be very helpful to our analysis .[ prop : law ] let be a non - negative continuously differentiable function with , and let be a random matrix , we have we also need to define the following functions where and are constants to be specified later on .it is easy to see tht and , and * proof of proposition [ lem : omega2bound ] : * define a function . then by proposition [ lem : fnormexpv ], we have is a lipschitz function on matrices with lipschitz constant ( see theorem [ thm : gaussfun ] ) : for equation ( [ eqn : svab1 ] ) , we can rewrite , by way of function in ( [ eqn : gghat ] ) and proposition [ prop : law ] , by theorem [ thm : gaussfun ] , we have for .putting it all together , where in the last equation we have used the fact that and that . 
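before turning to the probabilistic estimates , we note that the tabulated statement of algorithm [ alg : randsami ] above was garbled in this copy . the python / numpy sketch below is our reconstruction of the standard prototype that such basic randomized schemes follow ( gaussian test matrix , optional power / subspace iterations , qr , svd of the small projected matrix ) , not a verbatim transcription of the algorithm as printed in the original article ; the final comparison simply illustrates how close the result can come to the optimal error of theorem [ thm : truncsvd ] .

```python
import numpy as np

def randomized_low_rank(A, k, p=5, q=1, rng=None):
    """Rank-k approximation of A by randomized subspace iteration.
    k: target rank, p: oversampling, q: number of power iterations.
    Returns (U, s, Vt) with U (m x k), s (k,), Vt (k x n)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))      # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)
    for _ in range(q):                           # power (subspace) iterations
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                                  # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# quick check on a matrix with decaying singular values:
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 200)) @ np.diag(0.9 ** np.arange(200)) @ rng.standard_normal((200, 200))
k = 10
U, s, Vt = randomized_low_rank(A, k, q=2)
sigma = np.linalg.svd(A, compute_uv=False)
print("sigma_{k+1}          :", sigma[k])                                    # lower bound for any rank-k error
print("randomized 2-norm err:", np.linalg.norm(A - U @ np.diag(s) @ Vt, 2))  # close to it for q >= 1
```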
comparing equations ( [ eqn : svab1 ] ) and ( [ eqn : svb1a ] ) ,it is clear that we need to seek a so that for all values of .this is equivalent to or which becomes for , the right hand side reaches its maximum as approaches .hence it suffices to choose such that which solves to for and , we have .the last equation for is easily satisfied when we choose .we will now take a similar approach to prove equation ( [ eqn : svab2 ] ) .we rewrite , by way of function in ( [ eqn : gghat ] ) , since for , we now have comparing equations ( [ eqn : svab2 ] ) and ( [ eqn : svb1b ] ) , we now must seek a so that for all values of .equivalently , or which is the same as the right hand side approaches the maximum value as approaches .hence must satisfy which solves to again the choice satisfies this equation .* q.e.d . * the proof for proposition [ lem : omega1bound ] will follow a similar track . however , due to the complications with , we will seek help from lemma [ lem : largdev ] instead of theorem [ thm : gaussfun ] to shorten the estimation process .* proof of proposition [ lem : omega1bound ] : * as in the proof of proposition [ lem : omega2bound ] , we can write by lemma [ lem : largdev ] we have for any , following arguments similar to those in the proof of proposition [ lem : omega2bound ] , we have for a constant to be later determined , below we will derive lower bounds on ( [ eqn : bnd1 ] ) for the three difference cases of in proposition [ lem : omega1bound ] .for , equation ( [ eqn : bnd1 ] ) can be simplified as we now seek a so that for all values of .this condition is very similar to equation ( [ eqn : condelta ] ) .arguments similar to those used to solve ( [ eqn : condelta ] ) lead to the choice satisfies this equation for now we consider the case .we rewrite equation ( [ eqn : bnd1 ] ) in light of equation ( [ eqn : calfac1 ] ) in [ sec : apprelim ] : to prove proposition [ lem : omega1bound ] , we just need to find a constant so that where the asymptotic term behaves like when is tiny and like when is very large .equation ( [ eqn : bnd1a ] ) is equivalent to all the extra terms involving the function have added much complexity to the above expression .we cut it down with equations ( [ eqn : calfac5 ] ) and ( [ eqn : calfac6 ] ) in appendix [ sec : apprelim ] by replacing all relevant expressions involving by their corresponding calculus upper bounds .this gives or , which holds for and the last case for our lower bound in proposition [ lem : omega1bound ] is . with equation ( [ eqn : calfac2 ] ) in [ sec : apprelim ]: and the choice , equation ( [ eqn : bnd1 ] ) reduces to for it is now time to prove equation ( [ eqn : svab4 ] ) .our approach for is similar .we rewrite , by way of function in ( [ eqn : gghat ] ) , since for any , we now have similarly , we seek a so that this last equation is very similar to equation ( [ eqn : svb1c ] ) , with the only difference being the coefficients in the second term on the left hand side .thus its solution similarly satisfies again , the value satisfies this equation for the special cases and lead to some involved calculations with lemma [ lem : largdev ] .instead , we will appeal to lemma [ lem : chendon ] , an upper bound on the probability density function of smallest eigenvalue of the wishart matrix .it is a happy coincidence that this upper bound is reasonably tight for . by lemma [ lem : chendon ] , the integral in equation ( [ eqn : chendon2 ] ) can be bounded as below we further simplify equation ( [ eqn : chendon3 ] ) . 
for , the integral in ( [ eqn : chendon3 ] )becomes , according to equation ( [ eqn : calfac3 ] ) in [ sec : apprelim ] : replacing the integral in equation ( [ eqn : chendon3 ] ) , and plugging the resulting upper bound into equation ( [ eqn : chendon2 ] ) , we obtain the desired equation ( [ eqn : svab4 ] ) for . finally we consider the case .the integral in equation ( [ eqn : chendon3 ] ) can be rewritten as where we have used the substitution . applying the inequality to both factors in the denominator above , and utilizing the identity ( [ eqn : calfac4 ] ) from [ sec : apprelim ], we bound the integral from above as which leads to the desired equation ( [ eqn : svab4 ] ) for .* q.e.d . *here we list the facts we have used from calculus .their proofs have been left out , since they do not provide any additional insight into our analysis .we start with definite integrals : where , are all positive constants .we will also list the following inequalities for any : p. drineas , m. w. mahoney , and s. muthukrishnan .subspace sampling and relative - error matrix approximation : column - based methods . in j.diaz and _ et al ._ , editors , _ approximation , randomization , combinatorial optimization _ , volume 4110 of _ lncs _ , pages 321326 , berlin , 2006 .springer .e. liberty , n. ailon , and a. singer .dense fast random projections and lean walsh transforms . in a.goel , k. jansen , j. rolim , and r. rubinfeld , editors , _ approximation and randomization and combinatorial optimization _ , volume 5171 of _ lecture notes in computer science _ , pages 512522 , berlin , 2008 . springer .
a classical problem in matrix computations is the efficient and reliable approximation of a given matrix by a matrix of lower rank. the truncated singular value decomposition (svd) is known to provide the best such approximation for any given fixed rank. however, the svd is also known to be very costly to compute. among the different approaches in the literature for computing low-rank approximations, randomized algorithms have recently attracted researchers' attention due to their surprising reliability and computational efficiency in different application areas. typically, such algorithms are shown to compute, with very high probability, low-rank approximations that are within a constant factor of optimal, and they are known to perform even better in many practical situations. in this paper, we present a novel error analysis that considers randomized algorithms within the subspace iteration framework and shows that, with very high probability, highly accurate low-rank approximations as well as singular values can indeed be computed quickly for matrices with rapidly decaying singular values. such matrices appear frequently in diverse application areas such as data analysis, fast structured matrix computations and fast direct methods for large sparse linear systems of equations, and they are the driving motivation for randomized methods. furthermore, we show that the low-rank approximations computed by these randomized algorithms are actually rank-revealing approximations, and that the special case of a rank- approximation can also be used to correctly estimate matrix -norms with very high probability. our numerical experiments are in full support of our conclusions. * key words: * low-rank approximation, randomized algorithms, singular values, standard gaussian matrix.
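to make the basic scheme discussed above concrete, the following is a minimal numpy sketch of a randomized subspace-iteration method for a rank-k approximation. it illustrates the generic approach analyzed in the paper rather than the authors' exact algorithm [ alg : randsami ]; the function name, the oversampling parameter p and the number of iteration steps q are illustrative assumptions.

import numpy as np

def randomized_low_rank(A, k, p=5, q=2, seed=0):
    # sketch of a basic randomized subspace-iteration low-rank approximation
    # (illustrative defaults for oversampling p and iteration count q)
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))      # gaussian start matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis of the sampled range
    for _ in range(q):                           # subspace (power) iterations
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                                  # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]       # factors of a rank-k approximation

# usage: U, s, Vt = randomized_low_rank(A, k=10); A_k = U @ np.diag(s) @ Vt

the singular values returned by such a sketch are the quantities whose accuracy is analyzed in the error bounds above; the number of subspace-iteration steps q trades extra matrix-matrix products for improved accuracy when the singular values decay slowly.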
about forty years ago landauer discussed the minimum energy necessary for computation .his conclusion is that [ l ] erasure of information is accompanied by _ heat _generation to the amount of .it seems that his theory has been accepted widely . in our opinion , however , a more precise expression must be that [ if ] erasure of information is accompanied by _generation .the aim of the present paper is not to compare the above two statements but to point out some new concepts in the field of thermodynamics which are implicitly included in statement [ if ] .the new concepts that we will introduce are `` partitioned state '' ( , which corresponds to frozen state such as in ice ) , `` partitioning process '' and `` unifying process '' .we first explain [ if ] , namely , our thermodynamics of computation .it will be pointed out that the so - called `` residual entropy '' does not exist in the partitioned state .we then argue that a partioning process is an entropy decreasing process .finally we reconsider the second law of thermodynamics .a hardware with one - bit memory corresponds to a bistable physical system ( , for example , a particle in a double - well potential ) .the system can take two stable states which we call state `` 0 '' and state `` 1 '' .in the situation that the system stays in one of the stable states and never moves to the other stable state , we say that the system is in a `` partitioned state '' .this can be realized if the temperature is low enough in comparison with the potential barrier separating the two stable states .there are two partitioned states corresponding to `` 0 '' and `` 1 '' .the system functions as a memory device in one of the partitioned states , by keeping one bit of information .thus we will call the partitioned states as `` m(memory)''- state which can be `` 0 '' or `` 1 '' .if the potential has been modified from double - well to single - well , the system can take only one stable state .then we say that the system is in a `` unified state '' . by the change of the system from a partitioned state to a unified state , one bit of information is lost .we will call the unified state as `` n(neutral)''-state .we will study the thermodynamics of computation by using two models for a one - bit memory device .this is an idealized model of a memory device and is helpful to see the essence of physics in computational processes .the engine consists of a molecule , a cylinder , two pistons and a partition sheet ( fig .1 ) .the molecule is interacting with a heat bath with the temperature .when the partition sheet is inserted , the engine is in one of `` m''-states , i.e. , `` 0 '' or `` 1 '' . in state`` 0 '' ( `` 1 '' ) , the molecule is in the left - hand ( right - hand ) side . when the partition sheet is removed , the engine is in `` n''-state .differences between thermodynamical quantities in `` m''- and `` n''-states are `` writing process '' is as follows : the initial state is the `` n''-state .an agent who performs a computation pushes one of the two pistons to the center of the cylinder , inserts the partition sheet and returns the piston to the starting position . in this processthe agent does some work on the molecule , which amounts to during this process the entropy moves from the system to the heat bath , i.e. , the environment , corresponding to which the heat generation occurs : evidently , the `` writing process '' is reversible .there are two kinds of deleting process .the first one is the inverse of the writing process. 
then the work done by the agent and the heat generation in the environment are given by therefore the energy necessary for one cycle ( `` writing '' + `` reversible - deleting '' ) is zero and so is the net heat generation : however , the above deleting process is possible only when the agent knows the content of the memory , i.e. , in which state of `` 0 '' and `` 1 '' the system is .this implies that the same information remains in some other memory device after the deletion . in this processthe agent simply pulls out the partition sheet .obviously the agent can do it without any information on the device .both the work and the heat generation are zero in this case : but the entropy is produced in the engine which amounts to the cost of energy for one cycle ( `` writing''+``irreversible deleting '' ) and the net heat generation are given by this model ( see fig .2 ) is mathematically equivalent to a quantum flux parametron ( qfp ) device invented by goto . instead of pushing the pistons in the szilard engine a bias potentialis applied in this model .the partition sheet is replaced by a potential hill at the center .it is easily shown that the necessary energy and the net heat for one cycle ( `` writing '' + `` deleting '' ) are the same as those in the szilard engine : it should be noted that the amount of entropy generated in the `` irreversible - deleting process '' is equal to that for the szilard engine .the reason is as follows .entropy is generated when the height of the central hill is lowered so that the particle can jump into the other well by thermal noize . at this momentthe volume of the phase space where the particle can walk around expands twice .this means of entropy is generated .we summarize the thermodynamics of computation as follows .deletion of information is physically realized by a `` unifying process '' of a `` partitioned state '' , which means that the barrier ( the partition sheet in the szilard engine , the potential hill in the bistable - monostable system ) is removed and the two separated phase spaces are unified .this process is irreversible and accompanied by the entropy production .it is widely believed that a material whose ground state is degenerate has of residual entropy at low temperatures , where is its degeneracy .a well - known example is ice whose is almost equal to , where is the number of hydrogen atoms .the potential energy of a hydrogen atom has two minimum points between the neighboring two oxygen atoms , which is the origin of the degeneracy . at low temperatureseach hydrogen atom is localized at one of the two minimum points .therefore the ice is in a partitioned state in our terminology .if the ice really has residual entropy , our memory device also must have of entropy in `` m''- state where `` m '' is `` 0 '' or `` 1 '' .however this contradicts the entropy production in the `` irreversible - deleting process '' .we believe that the thermodynamics of computation described in the previous section really holds .thus we conclude that the residual entropy does not exist !a `` unifying process '' is an entropy generating process as stated above .then the inverse of the unifying process , that is , a `` partitioning process '' must be an entropy decreasing process .the simplest example of this process is to insert the partition sheet when the szilard engine is in the `` n''-state .it is evident that the entropy decreases by .so far thermodynamics has not taken into account partitioning processes . 
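for concreteness, the standard szilard-engine bookkeeping behind the processes described above can be summarized as follows. this is a reconstruction in assumed notation (boltzmann constant k, bath temperature t, system entropy s), since the original equations are not reproduced in this text:

\[
\text{writing } (n \to m):\quad W = kT\ln 2,\qquad Q_{\mathrm{bath}} = kT\ln 2,\qquad \Delta S_{\mathrm{sys}} = -k\ln 2,
\]
\[
\text{reversible deleting } (m \to n):\quad W = -kT\ln 2,\qquad Q_{\mathrm{bath}} = -kT\ln 2,\qquad \Delta S_{\mathrm{sys}} = +k\ln 2,
\]
\[
\text{irreversible deleting } (m \to n):\quad W = 0,\qquad Q_{\mathrm{bath}} = 0,\qquad \Delta S_{\mathrm{sys}} = +k\ln 2 .
\]

so the cycle `` writing + irreversible deleting '' costs kT ln 2 of work and generates kT ln 2 of heat in the bath while producing k ln 2 of entropy, whereas the reversible cycle costs nothing but requires the agent to know the content of the memory, consistent with the discussion above.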
in the following we are discussing how the second law should be modified when the partitioned state is involved .there are several axioms , theorems or lemmas expressing the second law .we classify them into four propositions and examine them one by one . 1 clausius s principle , thomson s principle and caratheodory s principle .these principles hold even when the partitioned states are involved .namely , it is not necessary to modify 1 . 2 clausius s inequality : for any realizable cycle of a system interacting with an external environment , the inequality holds , where is the heat that flowed into the system and is the temperature of the heat bath . if all of the processes are _reversible processes _ , the equality holds .the proposition 2 is derived from 1 if carnots cycle is assumed to work between heat reservoirs with different temperatures .we believe that carnot s cycle works . then 2 holds .but we must be careful when we use the term _reversible process_. it has been believed that the process a b is reversible if b a is reversible . however, this is not always true .let b a be a partitioning process .then a b is a unifying process . but remember that the partitioning process is reversible while the unifying process is not .therefore the expression _reversible process_ in 2 must be replaced by _reversible process in both directions_. 3 for any process a b , the following inequality holds : where the equality corresponds to_ reversible process_. the proposition 3 works if a cycle made of a b and b a satisfies the clausius s inequality , where the process b a is a _reversible process in both directions_. this condition is not satisfied when a b is a partitioning process ( szilard s demon ) .thus it is necessary to replace the expression for any process a b by if a b does not include partitioning processes. furthermore , _reversible process_ should be replaced by _reversible process in both directions_. 4 in adiabatic systems entropy does not decrease , namely increases or stays constant .the proposition 4 is a result obtained by applying 3 to adiabatic systems .it is necessary to add a condition that if partitioning processes are not included.entropy is a controversial subject . herewe state our understanding .entropy is an objective physical quantity like energy or volume .its relation with information is expressed in eq .( 1 ) . to record a certain amount of information in a physical system, we have to _ reduce _ its entropy by .landauer r. , _ ibm j. res .dev . _ * 5 * , 183 ( 1961 ) .bennett c.h ., _ scientific american _ 88 ( nov . 1997 ) .feynman r.f . , _feynman lectures on computation _ , adison - wesley , 1997 , ch . 5 , pp . 137 .goto e. , yoshida n. , loe k. f. , and hioe w. , _ proc .foundations of quantum mechanics _ , pp .412 - 418 ( 1989 ) .
landauer discussed the minimum energy necessary for computation and stated that erasure of information is accompanied by _ heat _ generation to the amount of . modifying the above statement, we claim that erasure of information is accompanied by _ entropy _ generation. some new concepts will be introduced in the field of thermodynamics that are implicitly included in our statement. the new concepts that we will introduce are the `` partitioned state '', which corresponds to a frozen state such as in ice, the `` partitioning process '' and the `` unifying process ''. developing our statement, i.e., our thermodynamics of computation, we will point out that the so-called `` residual entropy '' does not exist in the partitioned state. we then argue that a partitioning process is an entropy decreasing process. finally, we reconsider the second law of thermodynamics, especially when computational processes are involved.
the integration of agricultural commodity and financial markets has been largely criticized by both a growing body of economic literature ( see e.g. ) and political consideration . according to those criticsthe overflow of capital invested in the commodity markets feeds a destabilizing speculation , and a tightening of market regulation is then claimed . while the empirical evidence of international food price spikes and volatility driven by financial markets is robust ( see and references therein ) , its mechanisms and potential drawbacks for production risk management are unexplored .this paper analyzes the effects of market integration on conventional production planning and farm liquidity risk management policies within a stylized equilibrium model of production , trade and consumption of an agricultural commodity where financial and agricultural commodity markets are partially integrated .we suggest that the time to produce is the fundamental parameter governing risk allocation and production dynamics and should be considered in designing more efficient production schemes and distribution policies aimed at improving liquidity risk - sharing .the stylized model assumes that the production of an agricultural commodity is risky because farmers resources allocation and production decisions are irreversible and taken over a horizon , the `` time to produce '' . on the other hand financial investors trade over a much shorter horizon ( ) and possess an information advantage contingent on the exogenous shocks affecting production of the commodity .in our model , investments on contingent contracts play a role similar to investments on physical storage and the zero net supply condition is assumed to hold on average .actually , as discussed by , any hedging policy , by requiring margin capital and liquidity , will in general have adverse effects on liquidity needs .in particular the inefficiency of long - term industry s risk management are documented and discussed in , in , and in .motivated by the above remarks , in the model at hand it is assumed that long - term investors face a liquidity constraint : at the end of the production cycle the farm goes bankrupt and its production is lost if the profit resulting from selling the production at market price is not sufficient to break even .consequently , while price levels continue to depend on the joint allocation decisions of financial investors and producers , profits from financial hedging can not be used to soften the liquidity constraint faced by long - term investors . in practicethis constraint reproduces a segmentation of the capital market which is empirically well documented : most of the profits in commodity market trading remunerate professional investors while farm hedging investments are low . 
in the modelthe commodity equilibrium price is thus set by the clearing of the market in an economy populated by an heterogeneous set of producers exposed to idiosyncratic and systematic shock components , by consumers described by an exogenously specified demand curve and by an extra demand or supply contribution generated by financial agents .a comparative statics exercise proves that this model can reproduce many interesting stylized facts .first of all it is possible to prove that the long - term investors modify the production choice due to the effects of financial trading .in addition the financial investors make profit by exploiting their informational advantage .finally , the model shows a progressive increase in the volatility of the produced quantity and a growth in default risk with an increasing market integration .our model highlights that the major issue in farm risk management is the necessity to alleviate the effects of credit constraints to reduce price pressure on producers and consumers .correspondingly , public subsidization of agricultural investments can be interpreted as a necessary ( possibly inefficient ) response to restore the balance of capital flows from short - term to long - term investors .a rationalization for the farm policy actions similar to our has been discussed also in .our approach , however , is more focused on the description of farmers production planning decisions and our findings indicate that a rational production planning and liquidity risk management must classify different productions and financing opportunities considering the time to produceconstraint faced by different participants . the paper is organized as follows : in section [ mat ]we define the model and discuss its innovations with respect to the past literature . in section [ calc ] we present and discuss numerical results and their dependence on exogenous parameters .section [ res ] is devoted to a discussion about the assumptions made within the model , while section [ conc ] to conclusions .a simplified ( analytical solvable ) version of the model is provided in the appendix , together with some techinical aspects of the self consistent calculation procedure adopted .the model at hand is based on many simplifying assumptions in common with other rational expectations competitive storage models , , , and .the intrinsic information asymmetry introduced in the model ( discussed in section [ model ] ) can be seen as a reduced form of the long - standing hedging pressure theory of commodity prices that dates back to , and , more recently , to .market efficiency and its implications on commodity futures prices dynamics dates back to the seminal contribution of . in of perfect ( infinitely liquid and frictionless ) capital markets, farmers could borrow sufficient capital to invest in the production and simultaneously take a position in the contingent contract markets to hedge the production risk .the relation between liquidity constraints and productivity in farming and agricultural industries is grounded on a vast literature , see e.g. and references therein .the influence of biological production lags on agricultural commodity price dynamics has been investigated in literature by means of the well known cobweb model , for example in the case of u.s .beef markets ( see ) . a thorough analysis of the relation between credit and liquidity constraints , farm investment policies and optimal subsidization policies could be found in . 
in the equilibrium analysis described in it is shown that the presence of producers liquidity constraints induces mean reversion in futures prices , while government price subsidies , if actively hedged by producers , lowers futures risk premia and reduces price volatility . differently from that approach , here we explicitly model a multiplicity of producers which can default due to price dynamics . in this way we can analyze the equilibrium relations between liquidity restrictions , producers default and commodity prices . in this respect ,our model bears some similarities with the model discussed in where limits to arbitrage generate limits to hedging by producers .production , trade and consumption of a real good are explicitly modeled .the price is set by the equilibrium between the supply and demand when the market clears .in particular , the equilibrium price is determined by the agents interaction , their operational timeline and market clearing conditions .hereafter agents are idealized to the extent that farmers can only produce real goods and financial investors can only trade goods whose payoff is contingent to the production outcome of the real one .the two types of agents are considered separated from one another in order to study the corresponding sector s returns . however, a real agent could behave as a combination of the two idealized ones . as will be commented below considering a real agent ( for example a farmer that can both produce and trade )does not alter the results of the model .supply and demand are regulated by the interaction of three types of agents : farmers , consumers and financial investors . ** farmers * are the producers of the commodity goods in the economy .we consider an ensemble of farmers , all producing the same food commodity , which only are allowed to _ sell _ the produced goods .a single farmer produces a quantity of this commodity investing an amount of capital .we consider that each farmer incurs in a fixed cost which do not depend on the quantity produced .for the sake of simplicity we assume to be the same for all the farmers .+ the operational decision of the quantity to be produced is taken by the farmer at time .the quantity will be available on the market at a later time , corresponding to the `` time to produce '' , and sold at a price .thus , the profit of a farmer will be : + producers are assumed to be risk neutral and , for the sake of simplicity , a zero rate discounting is applied .extensions of the model to both fixed cost farmers dependency and to finite rate discounting are straightforward and will be discussed elsewhere . within our frameworkthe effect of individual risk aversion would be mitigated by the existence of a multitude of heterogeneous producers .+ if the actual profit reaped by the farmer is negative the farmer defaults and the amount produced is distributed among lenders and does not contribute to the total quantity of goods brought to the market at time . notice that in our model the liquidity constraint , that , as we will see is crucial in determining the production output volatility , is introduced by modeling the farmers default .the quantity is determined by the investment level , via a production function , that is generically assumed to be concave and with a positive derivative .we choose a simple form of : where is the fitness which induces uncertainty in the final amount specific to each farmer . 
models the exogenous stochastic uncertainty shocks and its realization at time is not predictable by farmers at the time of the production operational decision .however , its probability distribution ( described in section [ 2.2 ] ) is assumed to be known .+ the time interval represents the lag between the farmers operational investment and the market clearing epoch ( in short , the `` time to produce '' ) . +all the farmers enter the market at time in exactly the same conditions .each farmer sets an investment level at time , maximizing the expected profit ] the average value between all the farmers of a function of : =\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}\phi(\theta_{\tau}^{i})=\int_{-\infty}^{\infty}dx\phi(x)n_{x}(\theta_{\tau},\sigma^{2}\tau)\label{eq : idios_ave}\ ] ] where and is a normal distribution of average and variance we will also denote by ] .the value of entering in is estimated by eq .( [ eq : gamma ] ) as : \label{eq : ep}\ ] ] where is a function of determined by the marked clearing ( see below ) and we have assumed , under rational expectation , that =e^{a}[\theta_{\tau}] ] .in fact on a single trade the investor can reduce or increase the amount of production available for consumption , but this deviation has zero expectation under the farmer information set . following the same reasoning used to compute the farmer total return , we interpret this static condition as a weak form of the dynamic constraint that financial investors are zero net suppliers of the traded commodity .notice that by construction the position held by the financial investor in the long run will play a role similar to virtual storage with zero contents in average , with the possibility of both negative and positive inventory fluctuations .the financial investor may produce a capital gain , given in the last term of eq .( [ eq : profit_spec ] ) , by extracting the informational rent generated by the possibility to select an optimal strategy based on the observation of the realized level of fitness .since the feasible strategies are market neutral , i.e. on average the capital invested in the long and short position adds up to zero , the investment return per unit of dollar long is quantified by : where is determined by the average capital gain accumulated by the investor over different outcomes of the fitness : \label{eq : pl}\ ] ] and , abstracting from margin requirements to short sell the commodity , is the expected amount of capital required for the investment in the long positions : + \ ] ] where is given by eq .( [ eq : qs ] ) and is the average trading transaction cost .the selling price and the allocation of resources among agents are determined by the clearing of the market , namely by the total supply equaling the total demand , conditioned to the realization of the fitness uncertainty . for , in case of an adverse ( favorable ) realization , ( ) , the production supply ( consumer demand ) is augmented by the extra supply ( demand ) generated by the financial agents investing on contingent goods maturing at . where is the quantity of goods produced at time by farmers that have not defaulted and is given by eq .( [ eq : qf ] ) , and is the extra financial investors demand or supply defined in eq .( [ eq : qs ] ) . 
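as discussed immediately below, this clearing condition has to be solved self-consistently with the farmers' expectation of the selling price. the following is a minimal numerical sketch of the inner clearing step for a given realization of the fitness shocks; the functional forms of production and demand, and all parameter values, are illustrative assumptions and not the paper's calibration.

import numpy as np

rng = np.random.default_rng(0)

n_farmers = 10_000
alpha = 0.5        # concavity of the assumed production function q = (m * exp(theta))**alpha
elasticity = 1.5   # assumed isoelastic consumer demand d(p) = c * p**(-elasticity)
c_demand = 5.0
fixed_cost = 2.0   # fixed cost per farmer (illustrative)
sigma_theta = 0.3  # dispersion of the fitness shocks at the clearing time
m_invest = 1.0     # common investment level (in the model it is set from the expected price)

def clearing_price(theta, m):
    # solve demand(p) = supply(p) by bisection; the output of defaulted farmers is lost
    q = (m * np.exp(theta)) ** alpha
    def excess_demand(p):
        profit = p * q - m - fixed_cost
        supply = q[profit >= 0.0].sum() / theta.size   # per-capita supply of non-defaulted farmers
        return c_demand * p ** (-elasticity) - supply
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0.0:
            lo = mid      # price too low: demand exceeds supply
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = rng.normal(0.0, sigma_theta, n_farmers)
p_clearing = clearing_price(theta, m_invest)
defaults = np.mean(p_clearing * (m_invest * np.exp(theta)) ** alpha - m_invest - fixed_cost < 0.0)
print(f"clearing price: {p_clearing:.3f}, default fraction: {defaults:.3f}")

in the full self-consistent procedure this clearing step would be repeated over many shock realizations to update the farmers' expected price, which in turn determines the optimal investment level.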
this equation must be solved iteratively together with eq .( [ eq : ep ] ) , since in this equation the selling price as a function of enters explicitly .the existence of a unique solution for the equilibrium price and the procedure for solving numerically eq .( [ eq : clearing ] ) together with ( [ eq : qf ] ) is discussed in the appendix .the equilibrium of the model is characterized by solving the market clearing in eq .( [ eq : clearing ] ) . in this mannerthe consumer prices , the equilibrium production ( given in eq .( [ eq : qf ] ) ) and the returns on investments and ( defined in eqs . ( [ eq : rs],[eq : rf ] ) ) , as well as other derivative quantities , can be determined for different values of market integration . the model s solution depends on the determination of the farmers expectation ( given in eq .( [ eq : ep ] ) ) , which is calculated using the price schedule as obtained by solving eq .( [ eq : clearing ] ) .the system of eq .( [ eq : ep ] ) and eq .( [ eq : clearing ] ) can not be solved analitically due to the presence of the liquidity constraint present in the lower extreme of the definite integral appearing in eq .( [ eq : frac ] ) .the system is then solved through the numerical self - consistent iterative procedure is described in the appendix .numerical results will of course depend on the choice of the scenario parameters , in particular , fixing the market elasticity , and determining fitness volatility .thus , a comparative statics of an analytic approximation of the model for the case of a single farmer has been used to facilitate the setting of the parameters values used in the complete model simulation .the analytic approximation of the model is also given in the appendix .the equilibrium solution discussed below has been found for the parameter set listed in table [ tab ] . in the followingwe also discuss the robustness of the results for different choices of the scenario parameters .figure [ fig : fig1 ] shows the price function solving eq .( [ eq : clearing ] ) , for the choice of the scenario parameters given in table [ tab ] , as a function of the difference , for ( representing the scenario with no markets integration ) and for a finite value of . as expected financial trading has the effect of stabilizing the equilibrium price by smoothing the price dependence on exogenous shocks .consistently with traditional perfect market models , the commodity price volatility is also reduced as an effect of trading ( see figure [ fig : price ] ) . according to the modelan increased trading efficiency will reduce price volatility in any analyzed scenario parameter .we argue that this is probably related to our choice of using statistically independent wiener processes to characterize the evolution of the farmers fitness appearing in eq .( [ evoltheta ] ) .expected speculator s and farmer s returns and are shown in figure [ fig:1 ] as a function of the market integration parameter . for values of market integration smaller than a critical value , the financial investor s expected return is negative , mainly due to the finite transaction cost considered in the model .we can interpret this result by observing that for market integration does not lead to an advantage for financial investors : i.e. 
for the financial transaction costs does not allow financial investors to extract their information rent .however , for larger values of financial integration the financial investors can exploit their superior information ( see figure [ fig:1 ] , top panel ) .we also observe that the variance of the financial investor return is almost constant in , consistently with the neutral hypothesis for financial investors strategies .notice that in the model it exists an which maximizes the expected returns for the financial investors .the financial investor s return depends on the choice of scenario parameters , whose explicit dependence has been found in the simplified analytic model of a single farmer ( described in the appendix ) . within the simplified model its dependence on the volatility of fitness and the market elasticity shown in figure [ fig : rs1 ] ( top ) .it can be observed that financial investors expected return is higher for highly risky ( high value of ) and inelastic markets ( low value of ) . the numerical solution for is also shown in figure [ fig : rs1 ] ( bottom ) .we observe that numerical and analytic solutions are qualitatively in agreement in reconstructing this observable .farmer s investment return is fairly constant and positive showing a slight maximum in correspondence to ( see bottom panel in figure [ fig:1 ] ) . in the long run farmer investment is productive , thus in the absence of liquidity constraints production is a profitable investment and , using land as a collateral would guarantee credit availability .the variance of the farmers return is low in the region where is not far from , while for larger values of it grows sharply .the variance is a good indicator of the level of risk faced by a farmer in producing the real commodity , and the model is able to reproduce the counterintuitive evidence that the financialization amplifies the variability of returns on commodity production .figure [ fig : returns ] quantifies the impact of the farmer s rational production planning on investment returns and by comparing the outcome of a rational farmer which sets the level of production taking into account the effect of market integration level , with the outcome of a naive farmers decision setting production level while ignoring the effect of market integration ( ) on the expectation value of ( see eq .( [ eq : ep ] ) ) . the financial investor s return does not change significantly , while the farmer s return , not surprisingly , shows a sharp reduction of return as an effect of a suboptimal production planning : naive farmers pay the maximum rent to financial investors which are better informed . within our model financialization of tradeis driven by the asymmetric impact of market integration .on the one hand the financial investor increases the investment opportunity and raises the expected return on investment , while on the other the liquidity constraint which limits the aggregate investment capacity of the farmer implies that their return volatility grows sharply with .in other terms the improved efficiency of production risk sharing exacerbates the effect of the liquidity constraints which worsen the financing conditions of real investment and raise the default rates .this is best understood by analyzing equilibrium production . 
as the degree of market integration increases , the production quantity ( shown in figure [ fig : quantity ] ) remains fairly constant , while its volatility increases with .the default rate in this market is approximately and almost constant with market integration , while the default rate variance increases with as a combined effect of the increase in the return volatility and the fitness fluctuation , .note that the longer the time to produce the larger the fluctuations in returns from real investment , thus an efficient subsidization approach should consider a redistribution of wealth which accounts for the differential levels of liquidity stress suffered by producers , intermediaries and last minute - traders with differential time to produce . to better identify the relation between farmers default and production volatility , in figure [ fig : varq ] we compare two different market conditions corresponding to different levels of exogenous parameters ( details reported in the figure caption ) : one characterized by a large default rate , the other characterized by a smaller , in which subsidization effects are simulated by artificially raising the level of consumers demand ( which is equivalent to a softening of the liquidity constraint ) . as can be seen the financialization effect , i.e. the relative increase in production volatility with increasing market integration is much larger in the market with higher default rates .notice that within our model the main channel of propagation of instability induced by financialization is the lack of an efficient hedging channel able to transfer the liquidity risk .thus , the most dangerous fluctuations are not price fluctuations but rather those induced by fluctuations in default rates .a thorough analysis of the policy implications under similar assumptions is discussed in , where it is suggested that subsidies are determined by the necessity to reduce default rates to improve welfare . within our equilibrium solutionall those conclusions continue to hold .it is however important to point out an important distinction : in our model defaults are driven by liquidity rather than credit risk .this is consistent with the well known observation that in most of case subsides are given to land owners which are well collateralized and have a reduced exposure to credit risk . in our model farmersare not allowed to invest in the financial market and financial investors do not invest in real production .it is important to notice , however , that this restriction does not impose any limit in the diversification opportunities of liquidity risk affecting production .the farmer can reduce the default probability only by reducing the amount of capital invested in the production of the real commodity .however , this farmer decision will not have any consequence on aggregate sector quantities .assume that a representative investor is allowed to invest both in real and financial activities by maximizing the risk return trade - off on those activities .the asymmetric impact of market integration on the returns of real production and financial trading would imply that the optimal allocation in financial investment would increase with and the investment in production would be depleted . 
changing the identity of the investors would not modify the clearing equation ( [ eq : clearing ] ) .for this reason the global relations of quantity , price and farmers survival fraction ( and their volatility ) would remain unaffected by any farmer hedging policy .the key condition is that the liquidity constraint applies only to real investment and generates capital market segmentation which is not mitigated by financial hedging . despite our modeldoes not address the welfare implications of the market imperfections , it suggests that an improvement in the liquidity risk management should be one of the main goals of optimal market regulations , taxation schemes and public subsidization programs and correspondingly that such policies should enhance liquidity risk sharing between producers , intermediaries and traders investing in commodity markets .stated that the longer is the investment horizon , in presence of large fitness uncertainty , the larger is the liquidity risk , optimal intervention schemes could be introduced to classify different agents according to their time to produce .in this paper we have shown that , at least in principle , in commodity markets dominated by uncertainty and characterized by a time to produce , at a certain critical level of market integration a financial trader operating with a deterministic strategy can take advantage of his on time action to realize a profit . upon the introduction of a diffusive market stochastic fitness ,we demonstrate that this is the consequence of a competitive equilibrium generated by the asynchronous decisions of the two agents participating in market clearing : producers are forced by their production cycle to anticipate market decisions when programming their production strategy , while financial investors can instantaneously react to current prices .in fact , the long production cycle operates as a credit and liquidity risk multiplier on farmers , which , along with liquidity constraints on production , makes the long - term ( production ) investments much riskier than short - term ( financial ) investments : due to the presence of liquidity constraints on long - term producers , the integrated spot commodity and financial market is inefficient in sharing the risk between market participants .the model shows that more _elastic _ markets tolerate _ larger _ values of market integration . on the contrary negative effects of market financialization are more pronounced on those markets characterized by long production cycles and large market fitness volatility . among negative effectswe point out that the combined effect of liquidity constraints and market financialization amplifies producers risk and production volatility .this fact may result in markets which are unprotected by price turbulence if investors move suddenly elsewhere their financial activity towards different investment targets .the model presented here can be developed and improved along many directions : first the above model implications can be tested on real data and an empirical analysis would certainly improve our understanding of market imperfections , second our analysis does not take into account the relevant sustainability issues which are generated by the complex interaction between commodity production , food quality health programs and standards of living of consumers . 
as for the first issuewe note that if spot commodity and financial market integration is proxied by the relative size of short and long - term allocations , our model predicts that for a specific commodity this quantity should positively correlate with production volatility of that commodity .the model solution discussed in section [ mat ] depends on the determination of the farmers expectation ] solving eq .( [ eq : ep ] ) reduces to : developing the probability distribution of in cumulants , eq .( [ eq : gamma ] ) can be solved with respect to and we obtain the single solution : ^{\frac{\beta}{\beta+1}}\label{eq : gamma1}\ ] ] where .o. orhangazi , financialisation and capital accumulation in the non - financial corporate sector : a theoretical and empirical investigation on the us economy : , cambridge journal of economics 32 ( 6 ) ( 2008 ) 863886 .g. tadesse , b. algieri , m. kalkuhl , j. von braun , http://dx.doi.org/10.1016/j.foodpol.2013.08.014[drivers and triggers of international food price spikes and volatility ] , food policy 47 ( 2014 ) 117128 .j. e. parsons , a. s. mello , rising food prices : what hedging can and can not do , blog : betting the business financial risk management for non - financial corporations .url : http://bettingthebusiness.com/ 2011/06/27/rising - food - prices - what - hedging - can - and - cannot - do/ ( 2011 ) [ cited 2013 ] . h. d. leathers , j .-chavas , http://ajae.oxfordjournals.org/content/68/4/828.abstract[farm debt , default , and foreclosure : an economic rationale for policy action ] , american journal of agricultural economics 68 ( 4 ) ( 1986 ) 828837 .http://arxiv.org / abs / http://ajae.oxfordjournals.org/ content/68/4/828.full.p% df+html [ ] , http://dx.doi.org/10.2307/1242129 [ ] .b. c. briggeman , c. a. towe , m. j. morehart , http://ideas.repec.org/a/oup/ajagec/v91y2009i1p275-289.html[credit constraints : their existence , determinants , and implications for u.s .farm and nonfarm sole proprietorships ] , american journal of agricultural economics 91 ( 1 ) ( 2009 ) 275289 .http://ideas.repec.org / a / oup/ ajagec / v91y2009i1p275 - 289.% html[http://ideas.repec.org / a / oup / ajagec / v91y2009i1p275 - 289.% html ] j. vercammen , http://econpapers.repec.org / repec : oup : erevae : v:34:y:2007:i:4:p:479 - 500% [ farm bankruptcy risk as a link between direct payments and agricultural investment ] , european review of agricultural economics 34 ( 4 ) ( 2007 ) 479500 ..the chosen set of external parameters used in the model .they have been set in order to satisfy the following requirements : the equilibrium existence condition ( eq . ( [ eq : existcond ] ) ) , the existence of a range of values for the market integration for which the financial investor s expected return is positive , and the requirement that the effect on the system induced by the presence of financial investors is not negligible .there exist different sets of parameters satisfying those requirements , however , as discussed in section 3 the results are robust for different choices of the external parameters .the value of is in units of the optimal investment level set in eq .( [ eq : m_best ] ) . [ cols="^,^,^,^,^,^,^",options="header " , ] ) , as a function of the difference for ( no financial and commodity markets integration ) and for a finite value of ( ) .the set of parameters that have been used is provided in table [ tab ] . 
[ figure : the equilibrium price obtained by solving eq. ( [ eq : clearing ] ) as a function of the fitness shock; prices are averaged over different realizations of the shock with the distribution function of section [ 2.2 ], the supplied quantity is averaged over non-defaulted farmers ( see eq. ( [ eq : qf ] ) ), and the error bars correspond to the variation. ]
[ figure : the expected returns of the speculator ( eq. ( [ eq : rs ] ) ) and of the farmers ( eq. ( [ eq : rf ] ) ) as a function of the market integration; averages are taken over realizations of the shock and over the farmers population, and the error bars correspond to the variation. ]
[ figure : the speculator return evaluated at the `` optimum '' as a function of the externalities dispersion, for three values of the market elasticity ( dot-dashed, dashed and solid lines ); top: results from the analytic calculation of appendix a; bottom: average results from the numerical calculation, with averages as in the previous figure. ]
[ figure : the expected returns of the speculator ( eq. ( [ eq : rs ] ) ) and of the farmers ( eq. ( [ eq : rf ] ) ), comparing the rational and the naive hypothesis for the farmers; averages as in the previous figures. ]
[ figure : the supplied quantity ( eq. ( [ eq : qf ] ) ) and the fraction of defaulting farmers ( eq. ( [ eq : frac ] ) ) as a function of the market integration; averages and error bars as above. ]
[ figure : the same comparison for a market with a higher consumers' level of demand, which results in a significantly smaller fraction of farmer defaults ( to be compared with market a, see section 3.3 ). ]
we propose a stylized model of production and exchange in which long-term investors set their production decision over a horizon, the `` time to produce '', and are liquidity constrained, while financial investors trade over a much shorter horizon ( ) and are therefore better informed about the exogenous shocks affecting the production output. the equilibrium solution proves that: (i) producers modify their production decisions to anticipate the impact of short-term investors' allocations on prices; (ii) short-term investments return a positive expected profit commensurate with the informational advantage. while the presence of financial investors improves the efficiency of risk allocation in the short term and reduces price volatility, the model shows that the aggregate effect of commodity market financialization raises the volatility of both the farms' default risk and the production output.
most speculative markets at national and international level share a number of stylized facts , like volatility clustering and fat tails of returns , for which a satisfactory explanation is still lacking in standard theories of financial markets .such stylized facts are now almost universally accepted among economists and physicists and it is now clear that financial markets dynamics give rise to some kind of universal scaling laws . showing similarities with scaling laws for other systems with many interacting particles , a description of financial markets as multi - agent interacting systems appeared to be a natural consequence .this topic was pursued by quite a number of contributions appearing in both the physics and economics literature in recent years .this new research field borrows several methods and tools from classical statistical mechanics , where emerging complex behavior arises from relatively simple rules due to the interaction of a large number of components .starting from the microscopic dynamics , kinetic models can be derived with the tools of classical kinetic theory of fluids .in contrast with microscopic dynamics , where behavior often can be studied only empirically through computer simulations , kinetic models based on pdes allow us to derive analytically general information on the model and its asymptotic behavior . in this paperwe introduce a simple boltzmann - like model for a speculative market characterized by a single stock and a socio - economical interplay between two different types of traders , chartists and fundamentalists .the model is strictly related to the microscopic lux - marchesi model and to kinetic models of opinion formation recently introduced in .in addition , we take into account some psychological and behavioral components of the agents , like the way they interact each other and perceive the risk , which may produce non rational behaviors .this is done by means of a suitable `` value function '' in agreement with the prospect theory by kahneman and tversky .as we will show people systematically overreacting produces substantial instabilities in the stock market . in an earlier paper a similar approachhas been used considering a single population of investors interacting in the stock market on the basis of the microscopic levy - levy - solomon model .the emergence of a lognormal behavior for the wealth distribution of the agents has been shown . though the theoretical set - up of the analysis is close in certain respects to that of , the structure of the model is rather different .namely , the description of individual behavior follows an opinion formation dynamic strictly connected with the price trend . in this way, the heterogeneity among agents as well as their social interactions will be taken into account which both are key elements affecting the outcome of the overall market dynamics . 
following the analysis developed in , we shall prove that the boltzmann model converges in a suitable asymptotic limit towards convection - diffusion equations of fokker - planck type .other fokker - planck equations were obtained using different approaches in .this permits to study the asymptotic behavior of the investments and the price distributions and to characterize the regimes of lognormal behavior and the ones with power law tails .the main finding of the present paper is that the presence of heterogeneous strategies , both fundamentalists and chartists , is essential to achieve basic stylized fact like the presence of fat tails .the rest of the paper is organized as follows . in section 2we introduce the boltzmann kinetic model for the interacting chartists and the price evolution .details of the strategy exchange between chartists and fundamentalists are also presented here .a characterization of the admissible equilibrium states of the resulting system is then reported .next , in section 3 , with the aim to study the asymptotic behavior of the chartists and price distributions , we introduce simpler fokker - planck approximations of the boltzmann system and give explicit expressions of the long time behavior .the mathematical details of the derivation of such fokker - planck models are reported in separate appendices at the end of the manuscript .numerical results which confirm the theoretical analysis are given in section 4 and some concluding remarks are discussed in the last section .we describe a simple financial market characterized by a single stock or good and an interplay between two different traders populations , chartists and fundamentalists , which determine the price dynamic of such stock ( good ) .the aim is to introduce a kinetic description both for the behavior of the microscopic agents and for the price , and then to exploit the tools given by kinetic theory to get more insight about the way the microscopic dynamic of each trading agent can influence the evolution of the price , and be responsible of the appearance of stylized fact like fat tails and lognormal behavior . similarly to lux and marchesi model ,the starting point is a population of two different kind of traders , chartists and fundamentalists .chartists are characterized by their number density and the investment propensity ( or opinion index ) of a single agent whereas fundamentalists appear only through their number density .the value is invariant in time so that the total number of agents remains constant . in the sequelwe will assume for simplicity .[ [ dynamic - of - investment - propensity - among - chartists . ] ] dynamic of investment propensity among chartists .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let us define , ] and ] is taken symmetric on the interval , and characterize the herding behavior , whereas defines the diffusive behavior , and will be also taken symmetric on .simple examples of herding function and diffusion function are given by with , , ( see figure [ fig1 ] ) .other choices are of course possible , note that in order to preserve the bounds for it is essential that vanishes in .both functions take into account that extremal positions suffer less herding and fluctuations . 
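as an illustration of the binary herding/diffusion dynamics described above, the following is a minimal monte carlo sketch of one interaction sweep among chartists. the functional forms of the herding and diffusion functions, the interaction strengths, and the replacement of the normalized value function of the price trend by a fixed signal are all illustrative assumptions, not the exact specification of the model.

import numpy as np

rng = np.random.default_rng(1)

def herding(y, a=0.5, b=0.5):
    # illustrative herding function: extremal opinions suffer less herding
    return a + b * (1.0 - np.abs(y))

def diffusion(y):
    # illustrative diffusion function vanishing at y = +/-1
    return 1.0 - y * y

def sweep(y, alpha1=0.05, alpha2=0.05, price_signal=1.0, sigma=0.05):
    # one monte carlo sweep of binary chartist interactions;
    # price_signal stands in for the normalized value function of the price trend
    n = y.size
    idx = rng.permutation(n)
    i, j = idx[: n // 2], idx[n // 2 : 2 * (n // 2)]
    yi, yj = y[i].copy(), y[j].copy()
    y[i] = (yi + alpha1 * herding(yi) * (yj - yi) + alpha2 * price_signal
            + sigma * rng.standard_normal(i.size) * diffusion(yi))
    y[j] = (yj + alpha1 * herding(yj) * (yi - yj) + alpha2 * price_signal
            + sigma * rng.standard_normal(j.size) * diffusion(yj))
    np.clip(y, -1.0, 1.0, out=y)   # numerical safeguard for the bounds
    return y

y = rng.uniform(-1.0, 1.0, 10_000)   # initial investment propensities
for _ in range(200):
    y = sweep(y)
print("mean investment propensity:", y.mean())

under the parameter constraints stated in the text the bounds on the investment propensity are preserved by the interaction itself; the clipping above is only a numerical safeguard for the coarse sketch.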
for , is constant and no herding effect is present and the mean investment propensity is preserved when the market influence is neglected ( ) as in classical opinion models a model ( see at the reference therein ) .( left ) and diffusion function ( right).,title="fig : " ] ( left ) and diffusion function ( right).,title="fig : " ] [ fig1 ] a remarkable feature of the above relations is the presence of the normalized value function in ] , we have to chose such that which gives analogously we can ensure , thus it is enough to take .\ ] ] for this reason , in the rest of the paper , we will consider only kernel of `` maxwellian type '' [ [ strategy - exchange - chartists - fundamentalists . ] ] strategy exchange chartists - fundamentalists .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in addition to the change of investment propensity due to a balance between herding behavior and the price followers nature of chartists , the model includes the possibility that an agent changes its strategy from chartist to fundamentalists and viceversa .agents meet individual from the other group , compare excess profits from both strategies and with a probability depending on the pay - off differential switch to the more successful strategy .when a chartist and a fundamentalist meet they characterize the success of a given strategy trough the profits earned by comparing here ] , is the reference point and .for example we choose and . in the first test we consider the case with i.e only chartists are present in the model .we computed the equilibrium distribution for of the investment propensity .we take , , a constant herding function and the coefficients .the initial data for the chartists is perfectly symmetric with , so the price remains constant with .a particular care is required in the simulation to keep since the equilibrium point is unstable and as soon as the results deviate towards a market boom or crash .after iteration the solution for the investment propensity has reached a stationary state and is plotted together with the solution of the fokker - planck limit in figure [ ch4:mptest1a ] . in the same figurewe report also the computed solution for the price distribution and the self - similar lognormal solution of the corresponding fokker - planck equation . a very good agreement between the computed boltzmann solution and the fokker - planck solution is observed .( left ) and log - normal distribution for the price ( right ) at .the continuous line is the solution of the corresponding fokker - planck equation.,title="fig : " ] ( left ) and log - normal distribution for the price ( right ) at .the continuous line is the solution of the corresponding fokker - planck equation.,title="fig : " ] .figure on the right is in log - log scale .the continuous line is the solution of the corresponding fokker - planck equation.,title="fig : " ] .figure on the right is in log - log scale .the continuous line is the solution of the corresponding fokker - planck equation.,title="fig : " ] in the second test case we considered the most interesting situation with the presence of fundamentalists , i.e both chartists and fundamentalists interact in the stock market .we compute an equilibrium situation where and the price stationary at the fundamental value .we take , , , .we report the result of the simulation for the price distribution at the stationary state . 
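The kinetic tests described above can be mimicked, in a heavily simplified form, by a direct Monte Carlo simulation of binary interactions. The sketch below is not the scheme used in the paper: the interaction rule, the bound handling (a plain clipping instead of the bound-preserving maxwellian kernel), and all parameter values are illustrative assumptions. It only reproduces the qualitative chartist-only behavior, a mean propensity that stays near zero and an approximately lognormal price distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M     = 20000, 5000    # number of chartist agents, number of price samples
n_steps  = 400
alpha1   = 0.05           # herding strength              (assumed)
sigma_y  = 0.05           # propensity noise              (assumed)
beta     = 0.01           # market impact of chartists    (assumed)
sigma_s  = 0.02           # price noise                   (assumed)

def H(y):  return 0.5 + 0.5 * (1.0 - np.abs(y))   # assumed herding function
def D(y):  return 1.0 - y**2                      # assumed diffusion, zero at +/-1

y = rng.uniform(-0.5, 0.5, size=N)   # symmetric initial propensities
s = np.ones(M)                        # price samples (stand-in for the density V(s))

for _ in range(n_steps):
    # random pairing of agents for binary "herding" interactions
    idx = rng.permutation(N)
    a, b = idx[: N // 2], idx[N // 2:]
    ya, yb = y[a].copy(), y[b].copy()
    y[a] = ya + alpha1 * H(ya) * (yb - ya) + sigma_y * rng.standard_normal(a.size) * D(ya)
    y[b] = yb + alpha1 * H(yb) * (ya - yb) + sigma_y * rng.standard_normal(b.size) * D(yb)
    np.clip(y, -1.0, 1.0, out=y)      # crude stand-in for the bound-preserving kernel

    # each price sample feels the common drift from the mean propensity
    # plus an independent multiplicative shock
    s *= 1.0 + beta * y.mean() + sigma_s * rng.standard_normal(M)

log_s = np.log(s)
print(f"mean propensity        : {y.mean():+.4f}")
print(f"log-price mean / std   : {log_s.mean():+.4f} / {log_s.std():.4f}")
print(f"log-price skewness     : {((log_s - log_s.mean())**3).mean() / log_s.std()**3:+.4f}")
# a skewness of log(s) close to zero is the signature of the (approximately)
# lognormal price distribution reported for the chartist-only regime.
```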
in figure [ ch4:mptest2 ]we show the price distribution together with the steady state of the corresponding fokker - planck equation .the emergence of a power law is clear also for the boltzmann model , and deviations of the two models is observed for small values of the price . in the third testwe consider the case with strategy exchange between the two populations of interacting agents .the switching rate used to run the simulation has the following form where represent the inertia of the reaction to profit differentials .we start the simulation considering .the fundamental price is , we take , , , , , , , and .furthermore we consider an herding function of the form .we run different simulations for iterations , with different values of , and , which measures respectively the herding and the market influence on the chartists .three fundamental behaviors can be observed .the predominance of chartists , which leads the market towards a crash or a boom ( see figure [ ch4:mptest1 ] ) , the predominance of fundamentalists , which originates damped oscillation of the price towards the fundamental value ( see figure [ ch4:mptest2a ] ) , and a balanced behavior , characterized by periods with oscillation of the price around the fundamental value ( see figures [ ch4:mptest3a ] and [ ch4:mptest3c ] ) . from the simulations it is observed that , if we start with a balanced population between chartists and fundamentalists , the parameter , which characterize the influence of the price trend on the chartists investment propensity , plays a determinant role in the competition between the two different trading strategies . in particular when fundamentalists are predominant and price oscillations become dumped . and .figure on the left represent the price averaged over samples .figure on the right represent the variation of the chartists s fraction among the entire population of agents.,title="fig : " ] and .figure on the left represent the price averaged over samples .figure on the right represent the variation of the chartists s fraction among the entire population of agents.,title="fig : " ] and .figure on the left represent the price averaged over samples .figure on the right represent the variation of the chartists s fraction among the entire population of agents.,title="fig : " ] and .figure on the left represent the price averaged over samples .figure on the right represent the variation of the chartists s fraction among the entire population of agents.,title="fig : " ] samples .the chartist dynamic is characterized by the parameters and .figure on the right represent the variation of the chartists s fraction among the entire population of agents.,title="fig : " ] samples .the chartist dynamic is characterized by the parameters and .figure on the right represent the variation of the chartists s fraction among the entire population of agents.,title="fig : " ] but computing the price averaging over samples.,title="fig : " ] but computing the price averaging over samples.,title="fig : " ]we derived an interacting agents kinetic model for a simple stock market characterized by two different market strategies , chartists and fundamentalists .the kinetic system couples a description for the propensity to invest of chartists and the price formation mechanism .the model is able to describe several market phenomena like the presence of of booms , crashes , and cyclic oscillations of the market price .the equilibrium behavior has been studied in a suitable asymptotic regime which originates a system of 
fokker - planck equation for the chartist s opinion dynamics and the price formation .we found that in a system of agents acting only using a chartist strategy the distribution of price converges towards a lognormal distribution .this is in good agreement with what previously found in and observed in .when a second strategy based on a fundamentalist approach is introduced in the model the prices distribution displays pareto power law tails , which is in accordance to what observed in the real market data . in the description of the chartists behaviorwe also introduced a value function which takes into account the effect of some psychological factors in the opinion formation dynamic .the main effect is to introduce market instabilities and to reduce the number of stable equilibrium configurations of the system .let us finally conclude by observing that in principle several generalizations are possible .we mention here the possibility to include multiple interacting strategies and/or the influence of the wealth as an independent variable in the market dynamics .we report in this appendix the details of the derivation of the fokker - planck equation ( [ 8a ] ) for the distribution of chartists .following first we recall the definition of weak solution for kinetic equations of the form ( [ boltzmanchart ] ) and ( [ ch4:eq : price ] ) .let $ ] and be the space of all borel measure of finite -th order momentum , equipped with the topology of weak convergence of the measures .let be the class of all real functions on such that and is hlder continuous of order + h^(m)_=_y_1y_2 < [ eq : hol ] where , and denotes the -th derivative of .let with an initial probability density , a weak solution for ( [ boltzmanchart ] ) is any probability density satisfying for and all , and such that the scaled density defined in ( [ eq : sc1 ] ) satisfies the equation in weak form where is a suitable symmetric support for the random variable which avoids the dependence of the kernel on the variables and .given let us take .+ from the microscopic dynamic of chartists we have in the asymptotic limit , , we have and we can use the taylor expansion where , for some inserting this expansion in the weak formulation of the boltzman equation , we get \tilde{f}(y)\tilde{f}(y_{*})d\eta d\eta_{*}dy_{*}dy \\ & + & r(\xi,\sigma)\end{aligned}\ ] ] where \label{resto } \\\nonumber & \cdot&(\phi''(\tilde{y})-\phi''(y))\tilde{f}(y)\tilde{f}(y_{*})d\eta_{*}d\eta dy_{*}dy.\end{aligned}\ ] ] in order to prove that the remainder ( [ resto ] ) goes to zero as we start observing that , being , and we get hence using the fact that , , and applying the following simple inequality with a suitable positive constant , we finally obtain to simplify computations , we assume that , with zero mean and variance , is the density of , where is a random variable with zero mean and unit variance , that belongs to , for , so we have and is bounded .this is enough to show that in the asymptotic limit defined by ( [ eq : sc2 ] ) the quantity tends to zero .+ finally taking the limit in the weak formulation yields \tilde{f}(y)\tilde{f}(y_{*})d\eta d\eta_{*}dy_{*}dy \\ & = & \int_{i}\left[-\left(\rho_{c}\tilde{\alpha_{1 } } h(y)(y - y ) + \rho_{c}\tilde{\alpha_{2}}(\tilde\phi - y)\right)\phi'(y ) + \frac{\lambda}{2}(\rho_{c}{d}^{2}(y))\phi''(y)\right]\tilde{f}(y)dy,\end{aligned}\ ] ] which is nothing but the weak form of the fokker - planck equation ( [ 8a ] ) .we can then state the following theorem let the probability density , and let the symmetric 
density be in with .then in the asymptotic limit defined by ( [ eq : sc2 ] ) the weak solution to the boltzmann equation ( [ eq : bw ] ) for the scaled density converges , up to extraction of a subsequence , to the weak solution of the fokker - planck equation ( [ 8a ] ) .in this appendix we derive the fokker - planck limit ( [ 8b ] ) for the scaled density distribution of the price .now let be the class of all real functions on such that and is hlder continuous of order .we have the following given an initial price distribution with a weak solution to ( [ ch4:eq : price ] ) is any probability density satisfying for and all and such that again we start with the weak formulation which now reads where is a suitable symmetric support for the random variable which avoids the dependence of the kernel on the variable .let us take with . using a taylor expansion of around where for some and substituting into ( [ weakprezzo ] ) we have \tilde{v}(s ) d\eta ds \\& + & r(\beta,\zeta,\xi)\end{aligned}\ ] ] where analogously as before , in order to perform the asymptotic limit we need to show that the quantity approaches zero as .we observe that being and we have hence next we observe that \label{eq : st1 } \\\nonumber & & c_{2+\delta}\left((\beta\rho_{c}t_{c})^{2+\delta } + ( \beta\rho_{f}\gamma)^{2+\delta}\left(\frac{s_{f}^{2+\delta}+s^{2+\delta}}{s^{2+\delta}}\right ) + |\eta|^{2+\delta}\right),\end{aligned}\ ] ] where is a suitable constant . as in appendixa we assume that , with zero mean and variance is the density of , where is a random variable with zero mean and unit variance , that belongs to , for , so we have and is bounded .then we obtain \right.\\ & \cdot&\left.\int_{\r^+}s^{2+\delta}\tilde{v}(s)ds+ ( \beta\rho_{f}\gamma)^{2+\delta}s_{f}^{2+\delta}\right\}.\end{aligned}\ ] ] from this inequality it follows that tends to zero in the limit ( [ eq : sc3 ] ) if is bounded at any fixed time , provided that the same bound holds at time . to show this we start again from the weak formulation ( [ weakprezzo ] ) .the choice gives now where for some recalling the microscopic dynamic for the evolution of the price variable we can write d\eta ds \\ & = & \displaystyle \frac{p}{\xi}\int_{\r^+}\int_{k}\psi(\eta)\tilde v(s ) s^{p-1}\left[\left(\beta(\rho_{c}t_{c}y s + \rho_{f}\gamma(s_{f}-s)\right ) + \eta s\right ] d\eta ds \\& + & \displaystyle \frac{p(p-1)}{2\xi}\int_{\r^+}\int_{k}\psi(\eta)\tilde v(s)\tilde{s}^{p-2 } \left[\beta \left(\rho_{c}t_{c}y s + \rho_{f}\gamma{(s_{f}-s)}\right)+ \eta s\right]^{2 } d\eta ds.\end{aligned}\ ] ] since the random variable has zero mean value , the first term in the last expression reduces to .\ ] ] for the second therm , we know that ,\end{aligned}\ ] ] which implies ^{p-2},\end{aligned}\ ] ] with a suitable constant .gathering all this the weak formulation gives \\ \displaystyle & + & \frac{p(p-1)}{2\xi}{\bar c_{p}}\int_{\r^+}\int_{k}\psi(\eta)\tilde v(s ) s^p \left[\beta \left(\rho_{c}t_{c}y + \rho_{f}\gamma\frac{(s_{f}-s)}{s}\right)+ \eta\right]^{2}\\ & \cdot & \left[(\beta\rho_{c}t_{c})^{p-2 } + ( \beta\rho_{f}\gamma)^{p-2}\left(\frac{s_f^{p-2}+s^{p-2}}{s^{p-2}}\right ) + |\eta|^{p-2 } + 1\right]d\eta ds.\end{aligned}\ ] ] now if we consider the asymptotic limit ( [ eq : sc3 ] ) and recall ( [ etamoment ] ) for the high order moments of , it follows that the -moments of are bounded at any finite time independently of and for satisfy where and . 
+ coming back to the asymptotic expansion we can finally perform the limit \tilde{v}(s ) d\eta ds \\ &= & \int_{\r^+}\left [ \tilde{\beta}(\rho_{c}(t)yt_{c}s\rho_f\gamma(s_f -s))\phi'(s ) + \frac{\nu}{2}s^{2}\phi''(s)\right]\tilde{v}(s)ds,\end{aligned}\ ] ] which is the weak form of the fokker - planck equation for the price ( [ 8b ] ) .so we proved the following [ fokplankprice ] let the probability density .then in the limit defined by ( [ eq : sc3 ] ) the weak solution to the boltzmann equation ( [ weakprezzo ] ) for the scaled density converges , up to extraction of a subsequence , to a weak solution of .lux , t. , the socio - economic dynamics of speculative markets : interacting agents , chaos , and the fat tails of return distributions , _ journal of economic behavior & organization _ vol . 33 , ( 1998 ) , 143165 .solomon , s. , stochastic lotka - volterra systems of competing auto - catalytic agents lead generically to truncated pareto power wealth distribution , truncated levy distribution of market returns , clustered volatility , booms and crashes , _ computational finance 97 _ , eds .n. refenes , a.n .burgess , j.e .moody ( kluwer academic publishers 1998 ) .
in this paper we introduce a simple model for a financial market characterized by a single stock or good and an interplay between two different trader populations , chartists and fundamentalists , which determine the price dynamics of the stock . the model is inspired by the microscopic lux - marchesi model . the introduction of kinetic equations makes it possible to study the asymptotic behavior of the investment and price distributions and to characterize the regimes of lognormal behavior and the formation of power - law tails . * keywords : * kinetic models , opinion formation , stock market , power laws , behavioral finance
the minkowski ( ) metric is inarguably one of the most commonly used quantitative distance ( dissimilarity ) measures in scientific and engineering applications .the minkowski distance between two vectors and in the -dimensional euclidean space , , is given by three special cases of the metric are of particular interest , namely , ( city - block metric ) , ( euclidean metric ) , and ( chessboard metric ) . given the general form ( [ equ_lp ] ) , and can be defined in a straightforward fashion , while is defined as the minkowski metric enjoys the property of being translation invariant , i.e. , for all . since in many applicationsthe data space is euclidean , the most natural choice of metric is , which has the added advantage being isotropic ( rotation invariant ) .for example , when the input vectors stem from an isotropic vector field , e.g. , a velocity field , the most appropriate choice is to use the metric so that all vectors are processed in the same way , regardless of their orientation .however , has the drawback of a high computational cost due to the multiplication and square root operations . as a result , and often used as alternatives .although these metrics are computationally more efficient , they deviate from significantly . due to the translation invariance of , it suffices to consider , i.e. , the distance from the point to the origin .therefore , in the rest of the paper , we will consider approximations to rather than .let , defined on , be an approximation to ( euclidean norm ) .we assume that is a continuous and absolutely homogeneous function .recall that is called absolutely homogeneous ( of degree one ) if we note that all variants of we consider in this paper satisfy these assumptions . as a measure of the quality of the approximation of to wedefine the maximum relative error ( mre ) as using the homogeneity of and , can be written as where is the unit hypersphere of with respect to the euclidean norm .furthermore , by the continuity of , we can replace the supremum with maximum in and write we will use as the definition of mre throughout .mukherjee recently introduced a class of distance functions called weighted -cost distances that generalize -neighbor , octagonal , and -cost distances .he proved that weighted -cost distances form a family of metrics and derived an approximation for the euclidean norm in . herewe briefly review the -cost norm .the -cost norm defines two points in the rectangular grid as neighbors when their respective hypercubes ( or hypervoxels ) share a hyperplane of any dimension .the cost associated with these points can be at most , , such that if two consecutive points on a shortest path share a hyperplane of dimension , the distance between them is taken as .there are distinct -cost norms defined by where is the -th absolute largest component of ,i.e. , is a permutation of such that .the mre of this norm is given by mukherjee generalized the -cost norm as follows : where s are non - negative real constants .based on this weighted norm , he then derived an approximation for using the following weight assignment : for .note that consistently underestimates and the corresponding mre is given by in a recent study , we examined various euclidean norm approximations in detail and compared their average and maximum errors using numerical simulations . herewe show that two of those approximations , namely barni _ et al ._ s norm and seol and cheun s norm , are viable alternatives to .barni _ et al . 
_ formulated a generic approximation for as where and are approximation parameters .note that a non - increasing ordering and strict positivity of the component weights , i.e. , is a necessary and sufficient condition for to define a norm .barni _ et al ._ showed that the minimization of ( [ equ_max_err ] ) is equivalent to determining the weight vector and the scale factor that solve the following minimax problem : where .the optimal solution and its mre are given by note the striking similarity between and .interestingly , a similar but less rigorous approach had been published earlier by ohashi .it should also be noted that several authors approached the problem from a euclidean distance transform perspective and derived similar approximations for the - and -dimensional cases , see for example and .furthermore , computation of weighted ( chamfer ) distances in arbitrary dimensions on general point lattices is discussed in .more recently , seol and cheun proposed an approximation of the form where and are strictly positive parameters to be determined by solving the following linear system where is the expectation operator .seol and cheun estimated the optimal values of and using -dimensional vectors whose components are independent and identically distributed , standard gaussian random variables . in , we demonstrated that a fixed number of samples from the unit hypersphere gives biased estimates for the mre .the basic reason behind this is the fact that a fixed number of samples fail to suffice as the dimension of the space increases .it is easy to see that and fit into the general form which is a weighted norm .for the weights are and , whereas for they are and .clearly , has a more elaborate design in which each component is assigned a weight proportional to its ranking ( absolute value ) .however , this weighting scheme also presents a drawback in that a full ordering of the component absolute values is required . and can also be written as linear combinations of the and norms , as in . overestimates the norm , whereas underestimates it .therefore , it is natural to expect a suitable linear combination of and to give an approximation to better than either of them .note that rosenfeld and pfaltz obtained a -dimensional approximation by combining and nonlinearly as follows : .due to their formulations , the mres for and can be calculated analytically using and , respectively . in figure [ fig_max_err ]we plot the theoretical errors for these norms for .it can be seen that is not only more accurate than , but also it scales significantly better .maximum relative errors for and ] the operation counts for each norm are given in table [ tab_cost ] ( * abs * : absolute value , * comp * : comparison , * add * : addition , * mult * : multiplication , * sqrt * : square root ) .the following conclusions can be drawn : * and have the highest computational cost due to the fact that they require sorting of the absolute values of the vector components .* has the lowest computational cost among the approximate norms .a significant advantage of this norm is that it requires only two multiplications regardless of the value of .* can be used to approximate ( squared euclidean norm ) using an extra multiplication . on the other hand ,the computational cost of ( ) is higher than that of due to the extra absolute value and sorting operations involved . 
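Both families of approximations above are weighted combinations of sorted absolute components, or of the L1 and Linf norms. Since the closed-form optimal weights and scale factors are not reproduced in this copy, the sketch below simply fits the free parameters by least squares on random Gaussian vectors; the fitted values are a rough stand-in for the analytical optima and, for the Barni-type form, are not guaranteed to satisfy the non-increasing positive-weights condition required for a true norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3  # dimension, chosen arbitrarily for the illustration

def l2(x):   return np.sqrt(np.sum(x**2, axis=-1))
def l1(x):   return np.sum(np.abs(x), axis=-1)
def linf(x): return np.max(np.abs(x), axis=-1)

def barni_like(x, weights, scale=1.0):
    """phi(x) = scale * sum_k w_k |x|_(k), with |x|_(k) sorted in decreasing order."""
    xs = np.sort(np.abs(x), axis=-1)[..., ::-1]
    return scale * (xs @ weights)

def seol_cheun_like(x, alpha, beta):
    """phi(x) = alpha * ||x||_1 + beta * ||x||_inf."""
    return alpha * l1(x) + beta * linf(x)

# fit the free parameters by least squares on Gaussian samples (illustrative only,
# not the minimax / expectation-based optimization of the original papers)
X = rng.standard_normal((200000, n))
t = l2(X)

Xs = np.sort(np.abs(X), axis=1)[:, ::-1]
w, *_ = np.linalg.lstsq(Xs, t, rcond=None)                # Barni-type weights (scale folded in)

A = np.column_stack([l1(X), linf(X)])
(alpha, beta), *_ = np.linalg.lstsq(A, t, rcond=None)     # Seol-Cheun-type parameters

for name, approx in [("barni-type",      lambda x: barni_like(x, w)),
                     ("seol-cheun-type", lambda x: seol_cheun_like(x, alpha, beta))]:
    rel_err = np.abs(approx(X) - t) / t
    print(f"{name:16s}  avg rel err = {rel_err.mean():.4f},"
          f"  max rel err (sampled) = {rel_err.max():.4f}")
```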
.[ tab_cost ] operation counts for the norms [ cols="^,^,^,^,^,^",options="header " , ] in table [ tab_avg_max_err ] we display the percentage average and maximum errors for , , and for . average relative error ( are ) is defined as where is a finite subset of the unit hypersphere , and denotes the number of elements in . an efficient way to pick a random point on to generate independent gaussian random variables with zero mean and unit variance .the distribution of the unit vectors will then be uniform over the surface of the hypersphere .for each approximate norm , the are and mre values were calculated over an increasing number of points , ( that are uniformly distributed on the hypersphere ) until the error values converge , i.e. , the error values do not differ by more than in two consecutive iterations . in table[ tab_avg_max_err ] , the error values under the column " were obtained using the aforementioned iterative scheme , whereas those under the column " are taken from . motivated by the fact that consistently underestimates , we also experimented with a normalized form of this approximate norm given by .note that for .note that for and , two types of maximum error were considered : empirical maximum error ( ) , which is calculated numerically over and the theoretical maximum error ( ) , which is calculated analytically using and , respectively . by examining table[ tab_avg_max_err ] , the following observations can be made regarding the maximum error : * the most accurate approximation is .this is because this norm is designed to minimize the maximum error . *the proposed normalization is quite effective since the resulting norm , , is , on the average , only % less accurate than , whereas both and are , on the average , about % less accurate than . *the least accurate approximations are and for and , respectively . * as is increased , the error increases in all approximations .however , as can also be seen in fig .[ fig_max_err ] , the error grows faster in some approximations than others . * for ,the empirical and theoretical errors agree almost perfectly in all cases , which demonstrates the validity of the presented iterative error calculation scheme .as for , the agreement in each case is close , but not as close as that observed in .we have confirmed that using a smaller convergence threshold ( ) alleviates this problem at the expense of increased computational cost .on the other hand , with respect to average error we can see that : * is the most accurate approximation .this is because this norm is designed to minimize the average error .* and are the least accurate approximations .furthermore , the errors given by mukherjee are lower than those that we obtained ( over ) , and the discrepancy between the outcomes of the two error calculation schemes increases as is increased .the optimistic average error values given by mukherjee are due to the fact that his approximation was primarily intended for use in digital geometry and hence the calculations were performed in ( rather than ) using a very small number of points ranging from to .in fact , mukherjee used progressively fewer points with increasing to calculate the error values . in , we demonstrated that more points are required in higher dimensions to obtain unbiased error estimates . in the calculation of , we assumed that the optimal scaling factor for is the same as that of , i.e. , . 
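The error-estimation scheme just described (uniform sampling of the unit hypersphere via normalized Gaussian vectors, with the sample size increased until the estimates converge) is easy to reproduce; the sketch below applies it to a hypothetical CWD-style approximation and then performs the one-dimensional grid search over the scale factor discussed next. The approximation, the tolerance, and the grid are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def approx_norm(x, scale=1.0):
    # hypothetical CWD-style approximation: the largest absolute component plus
    # a discounted contribution of the remaining ones, rescaled by `scale`
    xs = np.sort(np.abs(x), axis=-1)[..., ::-1]
    return scale * (xs[..., 0] + 0.5 * xs[..., 1:].sum(axis=-1))

def sample_sphere(m, n):
    g = rng.standard_normal((m, n))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def errors(n, scale, tol=1e-3, m0=20000, m_max=1_280_000):
    """Estimate (ARE, MRE) over the unit hypersphere, doubling the sample size
    until both estimates change by less than `tol`."""
    m, prev = m0, None
    while m <= m_max:
        u = sample_sphere(m, n)
        rel = np.abs(approx_norm(u, scale) - 1.0)   # ||u||_2 == 1 on the sphere
        cur = (rel.mean(), rel.max())
        if prev is not None and abs(cur[0] - prev[0]) < tol and abs(cur[1] - prev[1]) < tol:
            return cur
        prev, m = cur, 2 * m
    return prev

n = 4
# one-dimensional grid search over the scale factor, as in the check described below
scales = np.linspace(0.7, 1.0, 16)
best = min(scales, key=lambda c: errors(n, c)[1])
are, mre = errors(n, best)
print(f"n = {n}: best scale ~ {best:.3f}, ARE ~ {100*are:.2f}%, MRE ~ {100*mre:.2f}%")
```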
in order to check this assumption, we performed a one - dimensional grid search over $ ] for each value .the results are shown in table [ tab_avg_max_err_grid ] .it can be seen that : * is significantly more accurate than with respect to both are and mre . * and have almost identical mres .since is analytically optimized for the maximum error it can be concluded that can reach the same optimality by means of a suitable scaling factor . *interestingly , is more accurate than with respect to are .this could be due to the fact that the two approximations take different paths towards minimizing the mre .in this paper , we examined the weighted -cost norm recently proposed by mukherjee with respect to its ability to approximate the euclidean norm in .we evaluated the average and maximum errors of this norm using numerical simulations and compared the results to those of two other well - known euclidean norm approximations .the results demonstrated that , because it was designed for digital geometry applications in , the original weighted -cost norm is not particularly suited to approximate the euclidean norm in .it is also shown , however , that when normalized with an appropriate scaling factor , mukherjee s norm becomes competitive with an analytically optimized approximation with respect to both average and maximum relative errors .this work was supported by grants from the louisiana board of regents ( leqsf2008 - 11-rd - a-12 ) and us national science foundation ( 0959583 , 1117457 ) .the authors are grateful to the anonymous reviewers for their insightful suggestions and constructive comments that improved the quality and presentation of this paper .m. barni , f. bartolini , f. buti , and v. cappellini , `` optimum linear approximation of the euclidean norm to speed up vector median filtering , '' proceedings of the 2nd ieee international conference on image processing , pp . 362365 , 1995 .
mukherjee ( pattern recognition letters , vol . 32 , pp . 824 - 831 , 2011 ) recently introduced a class of distance functions called weighted -cost distances that generalize -neighbor , octagonal , and -cost distances . he proved that weighted -cost distances form a family of metrics and derived an approximation for the euclidean norm in . in this note we compare this approximation to two previously proposed euclidean norm approximations and demonstrate that the empirical average errors reported by mukherjee are significantly optimistic in . we also propose a simple normalization scheme that substantially improves the accuracy of his approximation with respect to both average and maximum relative errors .
for the last ten years , the mathematical formalism of quantum theory has been actively applied outside the domain of quantum physics .we have seen numerous applications in decision making ( both in cognitive and social science ) , economics and also finance .see for instance acacio de barros and suppes ( 2009 ) , asano et al .( 2010 ) , bruza et al .( 2005 , 2009a , 2009b ) ; busemeyer et al .( 2006a , 2006b ) ; cheon et al .( 2006 , 2010 ) ; choustova ( 2007 ) , pothos et al . ( 2009 ) , franco ( 2009 ) , haven ( 2006 , 2008a , 2008b , 2009 ) and la mura ( 2008 ) .recently the _ quantum - like _ ( ql ) approach started to be explored in political science .some of the ql features of the * * * * behavior of voters in the us political system were discussed in zorn and smith ( 2011 ) .the authors start with a * * * * comparison of the notions of state separability in conventional models of party governance and in quantum information theory ( see * * * * zorn and smith ( 2011 ) ) and they then show that the ql model might provide a more adequate description of the voters state space ` mental space ' .the authors present a strong motivation of the usage of the complex hilbert space as the voters ` mental space . ' in this paper we present a ql - model describing the dynamics of the voters state ( as represented in the complex hilbert space ) .first , we consider what we could call * * * * ` a free ql - dynamics ' , when a voter is not under the pressure of mass media and the social environment . by applying the quantum approachwe describe the dynamics of her state by using an analogue of the schrdinger equation .a simple mathematical analysis implies that alice s preferences encoded in her state - vector ( ` mental wave function ' ) fluctuate _ without _ stabilization to the definite state .hence , such a dynamics can describe the unstable part of the electorate : those voters who have no firm preferences . in quantum physics , stabilization and damping of fluctuationsis a typical consequence of interaction with the environment .we apply this approach to the problem of the stabilization of fluctuations of voters preferences .an essential part of the paper is devoted to the analysis of the applicability of quantum dynamics to a social system ( e.g. a voter ) which is coupled to the social environment .the main problem is that the exact quantum dynamics of a system coupled to the physical environment is extremely complicated .therefore , to simplify matters , typically a quantum markov approximation is applied .this approximation is applicable under a number of non - trivial conditions ( see ingarden et al .( 1997 ) ) .our aim is to translate these conditions into the language of social science and to analyze their applicability to the dynamics of voters preferences . 
in this connectionthe quantum markovian dynamics , especially via the quantum master equation , can model ( approximately ) voters preference dynamics .our approach is based on the quantum master equation which describes the interaction of a social system with a ` social bath ' .we use a very general framework which can be applied to a variety of problems in politics , social science , economics , and finance .the main problem of any concrete application is to analyze the conditions of applicability of the quantum master equation ( the quantum markov approximation ) to the corresponding problem in decision making .we remark that the work of fiorina ( 1996 ) played an important role in the motivation of the quantum model based on the use of entangled quantum states ( see zorn and smith ( 2011 ) for the two institutional choices in u.s .politics the congress and the presidency ) .zorn and smith ( 2011 ) also present a detailed analysis of the inter - relation between classical and quantum models .such an analysis is very important to attract the interest of mainstream researchers in decision making to quantum models . for such researchers ,the applications of the quantum formalism to social science may on prima facie be considered as quite exotic .therefore , in this paper , we begin with an extended section in which we compare classical and quantum probabilistic approaches to decision making .our aim is not only to stress the differences , but also to find the commonalities .our findings argue for an important degree of similarity between quantum and subadditive probabilistic descriptions of decision making .we also emphasize the vital role of contextuality .one of the basic tools of probabilistic investigations in psychology , cognitive science , economics and finance is bayesian analysis ( see * * * * de finetti ( 1972 ) and kreps ( 1988 ) ) which allows for a process of mental updating of probabilities ( objective or subjective depending on the interpretation ) on the basis of newly collected statistical data .bayesian probability can be distinguished to be objective ( independent of the individual who makes a decision ) or subjective , that is to say , related to the * * * * personal belief of an individual ( see * * * * de finetti ( 1974 ) ) .the objective probabilities represent the choice that rational agents should make in the light of an objective situation and updating occurs as a consequence of the appearance of any new event ( see chalmers ( 1999 ) ) . by this approachthe agents are supposed to distribute the prior probabilities equally on the basis of some principle of indifference . in particular, the bayesian approach plays an important role in classical decision making ( see de finetti ( 1972 ) ) .we stress that this method is a part of conventional ( ` classical ' ) probability theory based on kolmogorov s axiomatics ( 1950 ) . the bayes formula for conditional probabilities is: where the law of total probability forms an integral part of the classical bayesian approach .let us consider the law of total probability in the simplest situation .consider an event and its complement and assume that the probabilities of both these events are positive .then , for any event the following formula ( of total probability ) holds : we note that for _ quantum probabilities _ , the law of total probability is violated ! 
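A toy numerical illustration may help here: the classical decomposition (2) is a fixed convex combination, while the quantum analogue written out in the next paragraph adds an interference term controlled by a phase. All numbers below are made up for illustration.

```python
import numpy as np

# classical law of total probability vs. its quantum-like modification
p_B, p_notB = 0.4, 0.6            # probabilities of the conditioning event and its complement
p_A_B, p_A_notB = 0.7, 0.2        # conditional probabilities of the event A

classical = p_B * p_A_B + p_notB * p_A_notB

# quantum-like version: add an interference term
# 2*cos(theta)*sqrt( p(B) p(A|B) p(notB) p(A|notB) ) to the classical sum
for theta in (0.0, np.pi / 3, np.pi / 2, 2 * np.pi / 3, np.pi):
    interference = 2 * np.cos(theta) * np.sqrt(p_B * p_A_B * p_notB * p_A_notB)
    print(f"theta = {theta:4.2f}  classical = {classical:.3f}  "
          f"quantum-like = {classical + interference:.3f}")
# theta = pi/2 recovers the classical value; other phases give constructive or
# destructive deviations that formula (2) cannot reproduce.
```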
in general , the difference between the left - hand and right - hand sides of ( 2 ) is nonzero .this difference is nothing else than the influence of the interference term , which plays a fundamental role in quantum theory ( as well as in classical physical wave theories ) .the quantum analog of the law of total probability has the form : depending on the sign of one observes constructive ( or destructive ( interference . in the first casethe probability to observe some phenomenon increases so much that it can not be explained by the laws of classical probability theory . in the second case onesimilarly finds a ` mystical ' decreasing of probability ( e.g. , probabilities can result in a zero probability , in the case the quantum formula of total probability ( the formula containing thus an interference of probabilities ) is reduced to the classical law of total probability .this is a very important point of transition from usage of the classical probabilistic model to the quantum probabilistic model . by decreasing the absolute value of interference coefficient , the lattercan be transformed into the former ( as the coefficient vanishes ) .thus the quantum probabilistic models in cognitive science , psychology , and social science are natural extensions of the classical models .if the deviation of the left - hand side of equation ( 3 ) from the right - hand side is relatively small , we can ignore the interference contribution and proceed with the classical law of total probability . in summary, we can think of the quantum approach of decision making as a natural generalization of the bayesian approach which is based on the transition from the classical formula of total probability to its quantum analogue .the following questions naturally arise when considering the use of equation ( 3 ) as a new tool in decision making : 1 .is the departure from equation ( 2 ) to equation ( 3 ) a totally new step in the development of probabilistic modeling in social science ? 2 .are there other conventional social models based on departures from the laws of classical probability ?surprisingly for those who argue for the exceptional novelty of the quantum approach to social problems , the answer is ` yes ' . in mainstream studies in cognitive science , psychology , behavioral economics and finance , non - classical probability has been actively used during many years .comparing the quantum approach with the traditional non - classical probabilistic approaches is not a straightforward task . in ql modelsthe _ violation of the law of total probability _ is considered as the crucial point . however , the majority of traditional non - classical models are not based on the aforementioned violation of equation(2 ) , but rather on an * * * * application of _ subadditive probabilities . _hence , the * * * * _ violation of the law of additive probability _ has already been * * * * actively discussed in social science .khrennikov and haven ( 2007 ) indicate ( p. 23 ) that * * * * when experiment participants have to express their degree of beliefs on a [ 0 , 1 ] interval , probabilistic additivity will be violated in many cases and subadditivity obtains .see bearden et al .( 2005 ) for a good overview .khrennikov and haven ( 2007 ) continue as follows ( p. 
23 - 24 ) : bearden et al .( 2005 ) also indicate that such subadditivity has been obtained with experiment participants belonging to various industry groups , such as option traders for instance ( fox et al .( 1996 ) ) .the key work pertaining to the issue of subadditivity in psychology is by * * * * tversky and koehler ( 1994 ) and rottenstreich and tversky ( 1997 ) .their theory , also known under the name of ` support theory ' is in the words of tversky and koehler ( 1994 ) ` ... a theory in which the judged probability of an event depends on the explicitness of its description . ' in other words , it is not the event which is important as such but its description . in tversky and koehler ( 1994 ) the authors highlight the ` current state of affairs' ... on the various interpretations that subjective probability may have . amongst the interpretationsis zadeh s ( 1978 ) possibility theory and the upper and lower probability approach of suppes ( 1974 ) .the paper of dubois and prade ( 1998 ) , also mentioned in tversky and koehler ( 1994 ) , provides for an excellent overview on non - additive probability approaches . to couple ql models based on the violation of the law of total probability and bayesian probability , with traditional studies based on subadditive probabilities ,we need to recall that the mathematical derivation of the formula of total probability is based on the additivity of probability and the bayes formula for conditional probabilities .therefore there are two possible sources of violation of equation ( 2 ) : i ) subadditivity and ii ) the non - bayesian definition of conditional probability .both these sources exhibit themselves in ql - models .hence , the subadditivity of probability is an important common point of the ql and traditional ( based on non - classical probability ) approaches .moreover , many experts in quantum physics especially stress the role of subadditivity of quantum probability as the main source of quantum interference ( feynman and hibbs ( 1965 ) .the ql approach can be considered as a special mathematical model describing the usage of subadditive probability in social science .it is not clear whether any social science based model with subadditive probability can be embedded in the ql - approach .the quantum probabilities have a very special structure : they are based on complex probability amplitudes , vectors from a complex linear space and probability is obtained from a squared complex amplitude . _it is not clear whether any subadditive probability from the aforementioned social science based _ _ models can be represented in this way ._ nevertheless , even if it might imply the loss of generality , the use of the linear space representation simplifies the operation with probabilities .furthermore , it provides us with a possibility to use a powerful mathematical apparatus of quantum mechanics in an interdisciplinary way . in this paper , we intend to explore quantum dynamical equations .we stress that the form of these equations depend very much on whether interaction with the environment is taken into account or neglected . 
herewe are merely interested in the application of quantum dynamics to the modeling of the evolution of the mental state of a human being interacting with an extremely complex social environment .the complexity of the actual environment is so high that it strongly influences the decision making process of an individual , finally implying a resolution from superposition of his / her mental states .in quantum terms a decoherence takes place . bayesian probability according to maher ( 2010 ) ( p. 120 )explicates a kind of rationality we would like our choices to have ... correspondingly , the ` absolute rational choice ' maher ( 2010 ) ( p.120 ) refers to , can be understood as the maximization of expected utility .the bayesian updating of probabilities and the validity of the law of total probability have a direct coupling with the problem of rationality in decision making .von neumann and morgenstern s ( 1944 ) expected utility theory , and savage s sure thing principle ( savage ( 1954 ) ) postulate a complete rationality ( i.e. a maximization of one s own payoff and the minimizing of one s own losses ) .savage ( 1954 ) ( p. 21 ) proposed the so called * * * * sure thing principle ( stp) , denoting that : if a person would not prefer [ a decision ] to either knowing that the event obtained , or knowing that the event obtained , then he does not prefer to [ whether knowing or not if the event or happened ] . savage ( 1954 ) * * * * illustrate**s * * the validity of the principle with an example of a businessman , who considers whether to buy some property before the presidential elections or not .savage ( 1954 ) ( p. 21 ) describes the situation of a businessman who is uncertain if the republicans or democrats will win the election campaign .he decides that he would buy the property if the republicans win , but also he decides that he should buy the property * * * * even if the democrats win . by taking the decision to buy in any case ( for example the decision ) ,it is natural to assume that the businessman will buy the property being uncertain of whether the republicans ( event or democrats will win ( event .the principle could be statistically represented with help of the formula of total probability ( equation ( 2 ) ) , where the events and are assigned some probability and the decision ( here depicted as to be consistent with savage s symbols ) would be a conditional probability of and so that the exact statistical probability for the possible decision could be obtained . in this illustrationwe see that the conditional probability of would be equal to one ( i.e. there is confidence about the purchase of a house ) . according to croson ( 1999 ) the event can be as well i ) * * * * an exogenous risk : the uncertainty about the state of nature ( e.g. the property purchase ) as well as ii ) * * * * a strategic risk : an uncertainty about the choice of a strategic opponent ( e.g. a competitor starts a price war ) .croson ( 1999 ) describes such a pattern of decision making as ` consequential reasoning ' , as the individual considers the consequences ( for instance the amount of the payoffs ) before considering a particular action .savage s sure thing principle has been regarded as a foundation axiom for decision making in economics .kreps ( 1998 ) ( p. 
120 ) called savage s principle the crowning glory of choice theories .however , many experiments , such as allais ( 1953 ) , tversky and shafir ( 1992 ) , croson ( 1999 ) proved that economic decision makers in general tend to violate the savage sure thing principle and expected utility theory .for example , in a prisoner dilemma type game experiment , violation of the rationality postulate of savage s sure thing principle was found in experiments performed by tversky and shafir ( 1992 ) and later repeated by croson ( 1999 ) and busemeyer et al .( 2006 ) .traditionally , this game is played in three conditions . in the ` unknown ' condition the player acts without knowing the opponent s action . in the known ` defect condition ' , the player knows that the opponent has defected before he / she acted . in the known ` cooperate condition 'the player knows that the * * * * opponent has cooperated , before he / she acted .see also tversky and shafir ( 1992 ) and pothos and busemeyer ( 2009 ) .we cite tversky and shafir ( 1992 ) ( p. 309 ) the subjects ... played a series of prisoners dilemma games , without feedback , each against a different unknown opponent supposedly selected at random from among the participants . in this setupthe rate of cooperation was 3% when subjects knew that the subject knew that the opponent has defected and 16% when they knew that the opponent has cooperated .however , when the subjects did not know whether their opponent had cooperated or defected ( as is normally the case of the game ) [ condition of uncertainty ] ) the rate of cooperation rose to 37% .this experiment showed that when the players are unaware of their opponents actions , they do not behave rationally as they are supposed to do in a conventional prisoners dilemma game .this anomaly in behavior occurred in other games of the prisoners dilemma type and also in hawaiian vacation experiments .the basic effect those experiments have in common is referred to by tversky and shafir ( 1992 ) and croson ( 1999 ) as the ` disjunction effect ' .busemeyer et al . ( 2006 ) show that the disjunction effect is equivalent to the * * * * violation of the law of total probability .since this law is violated by ql models , all such models in social science exhibit the disjunction affect . in the quantum communitythere is still no consensus on the basic roots of ` quantum mysteries ' ; in particular , the grounds for the violation of the laws of classical probability theory .one hundred years after the creation of quantum mechanics ( the 1920s-1930 s starting with the founders of quantum mechanics : bohr , heisenberg and einstein ) , the intensity of debates about its foundation have not abated .we may even claim the debates are more intense .one of the possible sources of the quantum mysteries is the notion of contextuality .the viewpoint that the results of quantum observations depend crucially on the measurement context was proposed by niels bohr , who emphasized that we are not able to approach the micro world ( with the aid of our measurement devices ) without bringing essential disturbances into its state .the quantum systems are too sensitive to the measurement apparata .the context of measurement plays an essential ( depending on the interpretation , even crucial ) role in forming the result of our measurement . 
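The reported cooperation rates make the violation of (2) easy to check by hand: whatever subjective probability the player assigns to the opponent defecting, the classical law of total probability keeps the "unknown" rate between the two "known" rates. A minimal check:

```python
# disjunction-effect check with the cooperation rates quoted above
p_coop_given_defect    = 0.03   # known "defect" condition
p_coop_given_cooperate = 0.16   # known "cooperate" condition
p_coop_unknown         = 0.37   # observed in the "unknown" condition

lo = min(p_coop_given_defect, p_coop_given_cooperate)
hi = max(p_coop_given_defect, p_coop_given_cooperate)
print(f"classically reachable range: [{lo:.2f}, {hi:.2f}]")
print(f"observed unknown-condition rate: {p_coop_unknown:.2f} "
      f"-> outside the range, so the law of total probability is violated")
```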
according to the fundamental interpretation of quantum mechanics , the ` copenhagen interpretation ', quantum systems do not have objective properties which exist independently of ` questions ' asked to these systems in the context of measurement . says suppes ( 1974 ) ( p. 171 - 172 ) * : * any time we measure a microscopic object by using macroscopic apparatus we disturb the state of the microscopic object and , according to the fundamental ideas of quantum mechanics , we can not hope to improve the situation by using new methods of measurement that will lead to exact results of the classical sort for simultaneously measured conjugate variables .the contextual viewpoint is attributed to the origin of non - classical probabilistic behavior of quantum systems and is very attractive for those who already apply or aim to apply a quantum formalism in other domains outside physics .it is important to stress that the contextual interpretation of quantum mechanics is more ` innocent ' than other essentially more exotic viewpoints , such as the quantum non- locality concept or the ` many worlds ' interpretations .the majority of people working in cognitive science and psychology would not accept a possibility of non - local interactions between human beings , e.g. through a splitting of reality in many worlds .the concept of contextuality is a well known feature of cognitive systems .we also see the origin of non - bayesian ( ` irrational ' ) decision making in the contextuality of observations performed for mental quantities , including self - observations .hence , the value of the subjective probability does not exist independently of the measurement context , only whilst ` asking about someone s preferences ' including ourselves , we create them .for example , in semantics studies context is treated by representing it as cue words , or co- appearing words . this semantic contextuality ( well known and actively explored in traditional semantic models )was used as the starting point for the development of ql model**s * * of word recognition ( see bruza et al .( 2005 , 2009a , 2009b ) .we also remark that contextual models of reasoning play an important role in artificial intelligence ( see f.i .giunchiglia ( 1993 ) , mccarthy ( 1993 ) .we now come back to the problem of rationality in decision making .we remark that contextuality of reasoning is closely coupled with the so called ` framing effect ' .kreps ( 1988 ) remarks ( p. 197 ) that the way in which a decision problem is framed or posed can affect the choices made by decision makers . according to tversky and kahnemann ( 1981 ) the term ` decision frame ' refers ( p. 453 ) to the decision - maker s conception of the acts , outcomes and contingencies associated with a particular choice .one of the most important contributions of the ql approach to the problem of contextual reasoning is the recognition of the existence of incompatible contexts and the use of well developed quantum tools for testing incompatibility , such as heisenberg s uncertainty relation or the violation of bell s inequality ( see f.i .khrennikov and haven ( 2007 ) ) .in particular , in the * * * * prisoner s dilemma game the contexts ( the decision of the partner is known ) , and ( information is absent ) , are incompatible .consequently , the ql approach is about : 1 .the violation of the sure thing principle , 2 .` irrational ' decision making , 3 .non - bayesian decision making , and 4 . 
the usage of subadditive probability .all these problems have already been widely discussed in traditional approaches to cognitive science , psychology , behavioral economics and finance .the ql approach is just one of the mathematical models which accurately describes all of the above effects .finally , we point to one of the pioneer papers that assigned quantum - like contextuality to the measurement of belief in decision making theories .suppes ( 1974 ) conjectured that general concepts taken from quantum mechanics could provide for the measurement of belief .he also explained the importance of the particular measurement context , by asserting that ( p. 172 ) : it is a mistake to think of beliefs as being stored in some fixed and inert form in the memory of a person .when a question is asked about personal beliefs , one constructs a belief coded in a belief statement as a response to the question . as the kind of question varies , the construction varies , and the results vary . ** w**e could articulate that the notion of measurement context , borrowed from quantum mechanics can be regarded as one of the promising theories of measurement of belief .with the help of the above mentioned features of ql models we now attempt to describe the dynamics of the process of decision making within the problem setting of party governance in the us - type two party system .this system allows voters to cast partisan ballots in two contests : executive and legislative . byso doing they can thus choose for instance ` republican ' in one institutional choice setting and ` democratic ' in the other ( see zorn and smith ( 2011 ) ) .it is well known from physics that the quantum state dynamics are described by schrdinger s equation .this type of dynamics is unitary . roughly speaking itis combined of a family of rotations and in principle , this family can be infinite .pothos and busemeyer ( 2009 ) applied this equation to model the dynamics of the process of decision making in games of the prisoner s dilemma type .however , it is questionable whether we can describe the dynamics of voters expectation by the schrdinger s equation .this equation describes the dynamics of an isolated system , i.e. , a system which does not interact with the environment .a voter in the context of the election campaign definitely can not be considered as an isolated social system .she , say alice , is in permanent contact with mass media ( whether tv or internet ) .such an influence of the environment induces random fluctuations of opinions and choices in alice s mind . for the purposes of our research ,we are interested in the ` unstable ' part of the electorate which is composed of citizens who have no concrete opinions and who will make their electoral choice very close to the actual day of the elections ( see zaller and feldman ( 1992 ) ) .if alice could be considered as an isolated social system , then the only possibility to describe a transition from the mental state of superposition of choices to the state corresponding to the concrete choice was to use the projection postulate of quantum mechanics ( the so called ` von neumann postulate ' ) .this state reduction process , from superposition to one of its components , is called _ _ the state collapse__. such collapse is imagined as an instantaneous ( the jump - type ) transition from one state to another . 
the state collapse might be used to describe the situation in which alice makes her choice precisely at the moment of completing the voting bulletin .this type of behavior can not be completely excluded from consideration , but such a case is probably not statistically significant . moreover, mainstream quantum mechanical thought will tell us that the state collapse occurs when an isolated system driven by schrdinger s equation interacts practically instantaneously with a measurement device .thus when alice is totally isolated from the election campaign , she is suddenly asked to make her choice .it is evident that the process of decision making for the majority of the ` unstable population ' in the electorate differs in essential ways from this collapse - type behavior. therefore , let us take more seriously the role which the social environment plays in the process of decision making .we apply to social science the theory of _ open quantum systems _ , i.e. , systems which interact with a large thermostat ( ` bath ' ) .since a bath is a huge physical system with millions of variables ( the complexity of the social bath around an american citizen who will cast his / her vote in the election campaign is huge ) , it is in general impossible to provide a reasonable mathematical description of the dynamics of a quantum system interacting with such a bath .physicists proceed under a few assumptions which allow then for the possibility to describe those dynamics in an approximate way . in quantum physicsthe interaction of a quantum system with a bath is described by a quantum version of the master equation for markovian dynamics .the quantum markovian dynamics are given by the _ gorini - kossakowski - sudarshan - lindblad _( gksl ) equation .see e.g. ingarden et al .( 1997 ) for details .this gskl equation * * * * is the most popular approximation of quantum dynamics in the presence of interaction with a bath .we briefly * * * * remind the origin**s * * of the gksl - dynamics .the starting point is that the state of a composite system , a quantum system combined with a bath , is a pure quantum state , complex vector the evolution of * * * * is described by schrdinger s equation .this is an evolution in a hilbert space of a huge dimension , since a bath has so many degrees of freedom .the existence of the schrdinger dynamics in the huge hilbert space has a merely theoretical value .observers are interested in the dynamics of the state of the quantum system the next fundamental assumption in the derivation of the gksl - equation is the markovian character of the evolution , i.e. the absence of long term memory effects .it is assumed that interaction with the bath destroys such effects .thus , the gksl - evolution is a markovian evolution .finally , we point to the condition of the ` factorizability ' of the initial state of a composite system ( a quantum system coupled with a bath ) , where is the sign of the tensor product .physically factorization is equivalent to the absence of correlations .one of the distinguishing features of the evolution under the mentioned assumptions is the existence of one or few _ equilibrium points . 
the state of the quantum system stabilizes to one of such points in the process of evolution: a pure initial state, a complex vector, is transformed into a mixed state, a density matrix (a classical state without superposition effects). in contrast to the gksl evolution, the schrödinger evolution does _not_ induce stabilization: any solution different from an eigenvector of the hamiltonian will oscillate forever. another property of the schrödinger dynamics is that it _always_ transfers a pure state into a pure state, i.e., a vector into a vector: quantumness, if it was originally present in a state (in the form of superposition), cannot disappear in the process of a continuous dynamical evolution. the transition from quantum indeterminism to classical determinism can happen only as the result of the collapse of the quantum state. on the one hand, in our model of decision making for party governance we would like to avoid the usage of the state collapse. on the other hand, to make a decision, alice has to make a transition from a quantum to a classical representation of her preferences. we note that in quantum physics all experimentally obtained information is classical as well. the gksl evolution provides for such a possibility (and without `quantum jumps'): alice's mental state evolves in a smooth way (fluctuations exist but they are damped) to the final classical decision state. we now list the social conditions corresponding to the above mentioned physical conditions; this will allow us to apply the gksl equation:
* (*compl*) _complexity_: the social environment (election bath) influencing a voter has huge complexity;
* (*free*) _freedom_: the mental state of the society under consideration is a pure ql state, i.e., a superposition of various opinions and expectations;
* (*dem*) _democracy_: the feedback reaction of a voter to the election bath is negligibly small; it cannot essentially change the mental state of the bath;
* (*sep*) _separability_: before the start of the election campaign a voter was independent of the election bath;
* (*mark*) _markovness_: a voter does not use a long-range memory of interaction with the election bath to update her state.
we surely need to make some comments on those assumptions.
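Before those comments, the contrast between the two kinds of dynamics can be checked numerically. The sketch below is a hedged illustration with an arbitrarily chosen coupling and damping operator, not the simulation reported later in the paper: it evolves the same pure initial state once under the Schrödinger (von Neumann) equation and once under a GKSL equation, showing that in the first case the purity tr(rho^2) stays at 1 and the occupation probability oscillates indefinitely, while in the second the oscillations are damped and the state approaches a mixed equilibrium.

```python
import numpy as np

lam, dt, steps = 1.0, 0.01, 3000
H = np.array([[0.0, lam], [lam, 0.0]], dtype=complex)   # flipping Hamiltonian
C = np.array([[0.0, lam], [0.0, 0.0]], dtype=complex)   # coupling to the 'election bath'

def rhs(rho, dissipative):
    d = -1j * (H @ rho - rho @ H)
    if dissipative:
        Cd = C.conj().T
        d += C @ rho @ Cd - 0.5 * (Cd @ C @ rho + rho @ Cd @ C)
    return d

def evolve(rho, dissipative):
    purities, p0 = [], []
    for _ in range(steps):
        k1 = rhs(rho, dissipative)
        k2 = rhs(rho + 0.5 * dt * k1, dissipative)
        k3 = rhs(rho + 0.5 * dt * k2, dissipative)
        k4 = rhs(rho + dt * k3, dissipative)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        purities.append(np.real(np.trace(rho @ rho)))
        p0.append(np.real(rho[0, 0]))
    return purities, p0

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # pure initial state
pur_s, p_s = evolve(rho0, dissipative=False)   # Schrödinger: purity stays 1, p0 oscillates
pur_g, p_g = evolve(rho0, dissipative=True)    # GKSL: purity decays, p0 stabilizes
print("Schrödinger purity (min, max):", min(pur_s), max(pur_s))
print("GKSL purity at the end:", pur_g[-1])
print("GKSL occupation probability at the end:", p_g[-1])
```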
1 .the assumption ( * compl * ) , _ complexity _ , is definitely justified .nowadays an election campaign has huge information complexity : the richness of media sources accounts for such complexity .we can even speculate that the proposed ql model is more adequate than say 50 years ago : the phenomenal increase of information complexity makes the usage of the ( quantum , quantum - like ) open systems approach more reasonable .the ( * free * ) , _ freedom _ , can be interpreted as guaranteeing the freedom of political opinions .the opposite to the ( * free*)-society , is a totalitarian society where its mental state is a classical state in which all superpositions have been resolved ( collapsed ) .the ( * dem * ) , _ democracy _ , encodes the democratic system : one voter can not change the mental state of society in a crucial way .the ( * sep * ) , _ separability _ , describes a sample of voters who are not that interested in politics : they will determine their positions through an interaction with the election bath during the election campaign .this part of the electorate is the most interesting from the point of view of political technologies .the ( * mark*)-assumption , _ markovness _ , also reflects the fact that voters under study are not that interested in politics .they do not spend a lot of time analyzing the dynamics of the election campaign .however , they are not isolated from the election bath ; they watch tv , read newspapers and use the internet . from a pragmatic point of view , they unconsciously update their mental states each day by taking into account recent news .[ markovness]we remark that the markovness of the dynamics may induce the impression that voter s preferences would fluctuate forever . however , this is not the case .the mathematical formalism of quantum mechanics implies that quantum markovean fluctuations stabilize to steady solutions . in physics , this theoretical prediction was confirmed by numerous experiments .although the social counterparts of physical assumptions seem to be natural and this motivates the applicability of our theoretical model , the final justification can come only from the testing of our hypothesis by experimental data .this is a very complex problem .[ decoherence]in quantum physics the process of transformation of a pure ( superposition - type ) state into a classical state ( given by a diagonal density matrix ) is called decoherence . a proper interpretation of this process is still one of the hardest problems _ in the _ foundations of quantum mechanics .some authors present the viewpoint that superposition is in some way conserved : the disappearance of superposition in a subsystem increases it in the total system . 
in our modelthis would mean that the determination of states of voters in the process of interaction with the election bath will _ transfer political uncertainty into an increase of political uncertainty in society in general , after elections ._ at the moment it is not clear whether this interpretation is meaningful in social sciences .the state space of a voter ( alice ) can be represented as the tensor product of two hilbert spaces ( each of them is two dimensional ) .one hilbert space describes the election to the congress , and we denote it by the symbol and another describes the presidential election , denote it by the symbol in each of them we can select the basis corresponding to the definite strategies if alice was thinking only about the election to congress , her mental state would be represented as the superposition of these two basis vectors : where are complex numbers and they are normalized by the condition : by knowing the representation of equation ( [ cong ] ) one can find the probabilities of intentions to vote for democrats and republicans in the election to the congress : however , the quantum dynamics of the state , in the absence of interactions with the political bath ( environment ) , see equation ( [ sch1 ] ) below ` social schrdinger equation ' , is such that the probabilities fluctuate. therefore , even if alice wanted to vote for republicans at in the process of mental evolution she will change her mind many times . in the same way ,if alice w**as * * thinking only about the election of the president , her mental state would be represented as * a * superposition of the two basis vectors where the corresponding probabilities are given by for a moment , let us forget about the quantum model and turn to classical probability theory .suppose that the classical probabilities , , , are given .furthermore , suppose that voters do not have any kind of correlations between two elections : their choice in the election to the congress does not depend on their choice of the president and vice versa. in this case independence implies factorization of the joint probability distribution : however , in the case of non - trivial correlations between the congress- and president - elections , the factorization condition is violated . in the quantum formalism, the models described by the two hilbert spaces are unified in the model described by the tensor product of these two spaces . in our casewe use the space its elements are of the form : which describe the states corresponding to uncorrelated choices in two elections . in quantum informationsuch states are called _ separable . 
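A compact numerical sketch may clarify this construction; the amplitudes below are hypothetical and are not taken from the paper. It builds a two-qubit voter state, computes the joint and marginal voting probabilities, and checks separability via the singular values of the coefficient matrix (a pure two-qubit state is a product state exactly when that matrix has rank one). The general, non-separable case is discussed next.

```python
import numpy as np

# Amplitudes c[a, b] for the joint choice (a = congress, b = president),
# with index 0 = 'democrat' and 1 = 'republican'.  Values are illustrative.
c = np.array([[0.8, 0.0],
              [0.0, 0.6]], dtype=complex)
c = c / np.linalg.norm(c)                 # enforce sum_ab |c_ab|^2 = 1

probs = np.abs(c) ** 2                    # joint voting probabilities
p_congress = probs.sum(axis=1)            # marginal for the congress election
p_president = probs.sum(axis=0)           # marginal for the presidential election

# Separability test: the second singular value vanishes iff the state factorizes.
singular_values = np.linalg.svd(c, compute_uv=False)
separable = singular_values[1] < 1e-12

factorized = np.outer(p_congress, p_president)   # what independent choices would predict
print("joint probabilities:\n", probs)
print("congress marginal:", p_congress, " president marginal:", p_president)
print("separable product state?", separable)
print("deviation from factorization:\n", probs - factorized)
```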
_ in general a state can not be factorized .nonseparable states describe correlations between choices in the two elections : where and the main point of usage of the quantum formalism is that quantum correlations are not reduced to classical correlations ( as * * * * described in the framework of the kolmogorov model ) .roughly speaking the quantum correlations can be stronger than the classical correlations .this is the essence of bell s theorem , bell ( 1987 ) .we also state again that the question of inter - relation between quantum and classical separability in the election framework was studied in zorn and smith ( 2011 ) .however , the authors * * * * did not appeal directly to bell s theorem , but to the more delicate condition of quantum ( non- ) separability .its role in social science was emphasized by bruza et al .( 2010 ) .the quantum dynamical equation has the form : where is the operator of energy , the hamiltonian , and is the planck constant .the mental interpretation of an analog of the planck constant is a complicated problem. we shall interpret it as the time scale parameter in this paper we do not want to speculate on such a controversial topic as mental energy ( but see however , choustova ( 2007 ) ) .therefore , we proceed formally by considering the evolution generator as a dimensionless quantity . ] .since the usage of the symbol may be a source of misunderstanding ( especially for physical science educated readers ) , we shall use a new scaling parameter , say having the dimension of time ( please see the preceding footnote ) .it determines the time scale of updating of the mental state of alice during the election campaign .we rewrite the dynamical equation as: and we call the operator , the * * * * _ decision hamiltonian . _ the most general hamiltonian in the space of mental states in the two - party systems ( wherein voters can cast partisan ballots in two contests , executive and legislative ) has the form where is the part of the hamiltonian responsible for the stability of the distribution of opinions about various possible selections of decisions .it is given by and is the part of hamiltonian responsible for flipping from one selection of the pair of strategies ( for executive and legislative branches ) to another .it is given by to induce a unitary evolution , the hamiltonian has to be hermitian .this induces the following restrictions to its coefficients : are real and in the absence of the -component , the probabilistic structure of superposition is preserved .only phases of choices evolve in the rotation - like way , e.g. , evolves as which corresponds to * a * rotation " of the strategy for the angle * a * larger induces quicker rotation .the meaning of such rotations of mental states has to be clarified in the process of the model s development .we can speculate that the coefficient correspond to the speed of self - analysis ( by alice ) of the choice in the presence of the flipping component the distribution of probabilities of choices of various strategies changes in the process of evolution . such flipping from one strategy to anothermakes the state dynamics really quantum .in fact , for political technologies per s , the most important component is the flipping part of the hamiltonian . of course , at the moment we proceed at * a * very abstract theoretical level .however , one may hope to develop the present ql model to the level of real applications .suppose that alice has neither a firm association with democrats nor with republicans , i.e. 
, the diagonal elements of the decision hamiltonian are equal to zero .suppose also that the flipping part of the hamiltonian contains only the transition: which expresses the combination ( democrats , republicans ) into the combination ( republicans , democrats ) , and vice versa .let the schrdinger equation has the form of a system of linear ordinary differential equations .the dynamics of coincidence of choices is trivial : hence , however , the presence of a nontrivial transition channel , equation ( [ trch ] ) , induces fluctuations of alice s preferences for choices and here we have the system of two equations : its solutions have the form : physics the dynamics of a system in a bath is described by the quantum analog of the master equation , the gksl - equation , see section 3 .we write this equation by using the time scaling constant instead of the planck constant : +l(\rho(t ) ) ; \label{gksl}%\ ] ] where is a linear operator acting in the space of operators on the complex hilbert space . in the dynamics described by equation ( [ gksl ] ) ,density operators are transformed into density operators .the general form of was found by gorini , kossakowski , sudarshan , and lindblad ( see , for example , ingarden et al .( 1997 ) . fornow , we are not interested in ( the rather complex ) structure of for our applications , it is sufficient to know that it can be expressed through matrix multiplication for a family of matrices .the simplest dynamics of interaction of alice with the two party election campaign is determined by two matrices and corresponding to advertising of democrats and republicans , respectively . under natural selection of the matrices any solution of this equation stabilizes to a diagonal density matrix this matrix describes the distribution of firmly established decisions for voting strategies where the density matrix describes a population of voters who finally determine their choices . denote the number of people in this population by there are then ( approximately ) people in the mental state ... and people in the mental state for example , people in the mental state ha**ve * * firmly selected to vote for democrats both in the executive and legislative branches .their decision is stable . from a pragmatic point of viewthere is no possibility to manipulate opinions of people in this population .consider two populations , say and suppose that in our ql model the first one is described by a pure state and the second one by the density matrix given by equation ( [ density ] ) .moreover , suppose that the complex amplitudes given by the coefficients in the expansion ( equation ( [ pure ] ) ) produce the same probabilities as the density matrix , i.e. : one may ask : what is the difference ? at first sightthere is no difference at all , since we obtain the same probability distribution of preferences .however , the distributions of mental state in ensembles and are totally different .all people in are in the same state of indeterminacy ( superposition ) they are in doubt .they are ready to change their opinion ( to create a new superposition of opinions ) .the is a proper population for political manipulations . to the opposite of population population consists of people who have already resolved their doubts .their mental states have already been reduced to states of the form i.e. 
definite choices .the general theory of quantum master equation**s * * implies that for some important open system dynamics , the limiting probability distribution _ does not depend on the initial state ! _ this mathematical fact has important consequences for our ql model of elections .it tells us that in principle it is possible to create such a quantum open system dynamics ( voters interacting with some election bath ) such that the desired state would be obtained _ independently _ of the initial mental state of alice. this theoretical result may play an important role in ql election technologies .even if a quantum master equation does not ha**ve * * the unique limiting state , there are typically just a few of them . in this case , we can split the set of all pure states ( the unit sphere in the complex hilbert state space ) into clusters of voters . for each cluster, we can predict the final distribution of decisions .we consider only the two dimensional submodel of the general four dimensional model corresponding to a part of the electorate which have double preferences democrats in one of the elections and republicans in another election .so , we reduce the modeling to the subspace with the basis it is assumed that at the beginning ( i.e. , before interaction with the ` election environment ' ) voters are in a superposition of the basic states : we also assume that in the absence of interaction with the ` election campaign ' the state of preferences fluctuations are driven by the schrdinger dynamics considered in example 1 . in the matrix form the corresponding hamiltonian can be written as {ll}% 0 & \;\lambda\\ \lambda & 0 \end{array } \right ) ; \label{be0dj4_p}%\ ] ] where is the parameter describing the intensity of flipping from to and vice versa .the simplest perturbation of this schrdinger equation is given by the lindblad term of the form given by ingarden et al .( 1997 ) : where denotes the operator which is the hermitian adjoint to the operator as always in quantum formalism , which denotes the anticommutator of two operators we select the operator by using its matrix in the basis {ll}% 0 & \;\lambda\\ 0 & 0 \end{array } \right ) ; \ ] ] hence , {ll}% 0 & \;0\\ \lambda & 0 \end{array } \right ) ; \ ] ] where the parameter is responsible for interaction between the voter s state . for simplicity ,the ` election campaign ' is selected in the same way as in the hamiltonian ( [ be0dj4_p ] ) .thus , we proceed with the quantum master equation : +c\rho(t)c^{\ast}-\frac{1}% { 2}\{c^{\ast}c,\rho(t)\}. \label{hhh1}%\ ] ] we present the dynamics corresponding to symmetric superposition , see fig .1 . strongly asymmetric superposition see fig .the interaction with the ` election environment ' plays a crucial role .strong oscillations of the dynamics , given by equations * * * * ( [ trcha1 ] ) , ( [ trcha2 ] ) in the absence of interaction with the ` election bath ' are quickly damped and the matrix elements and stabilize to the definite values .thus the preferences of population of voters who were in fluctuating superposition of choices stabilize under the pressure of the ` election bath ' .we selected such a form of interaction between a voter and the ` election bath ' such that both initial states , the totally symmetric state , i.e. 
, no preference to nor and the state with very strong preference for the * * * * combination in votes to congress and of president , induce dynamics with stabilization to the same density matrix this example demonstrates the power of the social environment which , in fact , determines the choices of voters .in the the elements determine corresponding probabilities under the pressure of the social environment those who started with a superposition as indicated in equation ( [ sp ] ) increase the -preference and those who started with the superposition in equation ( [ spa ] ) decrease this preference , and the resulting distribution of choices is the same in both populations ( with the initial state ( [ sp ] ) and with the initial state ( [ spa ] ) ) .we stress that manipulation by the preferences described by the dynamics in equation * * * * ( [ hhh1 ] ) in sufficiently smooth .those dynamics are an extension of the ` free thinking ' dynamics given by the schrdinger equation , the first term in the right - hand side of equation * * * * ( [ hhh1 ] ) .hence , in this model the social environment does not prohibit internal fluctuations of individuals , but instead damps them to obtain a ` peaceful ' stabilization .we emphasize that the degree of quantum uncertainty decreases in the process of evolution .one of the standard measures of uncertainty which is used in quantum information theory is given by so called _ linear entropy _( see ingarden et al .( 1997 ) ) defined as: for a pure state ( which has the highest degree of uncertainty ) , the linear entropy it increases with degeneration of purity in a quantum state and it approaches it maximal value for * a * maximally mixed state . herewe consider the two dimensional case ; in the general case where is the dimension of the state space .the dynamics of linear entropy corresponding to the initial states as per equations ( [ sp ] ) and ( 28 ) , respectively , are presented * in * fig .3 and fig 4 .we see that * the * entropy behave**s * * in different ways , but finally it stabilizes to the same value * * t**his value corresponds to a very large decreasing of purity uncertainty of the superposition type . numerical simulation demonstrated that , for other choices of pure initial states , the density matrix and the linear entropy stabilize to the same values .our conjecture is that it may be possible to prove theoretically that this is really the case .however , at the moment we have only results of numerical simulation supporting this conjecture . bruza p.d . and cole r.j .quantum logic of semantic space : an exploratory investigation of context effects in practical reasoning . in : s. artemov , h. barringer , a. s. davila garcez , l.c .lamb , j. woods ( eds . ) we will show them : essays in honour of dov gabbay .college publications .busemeyer j.r ., matthew m. , wang z.a .quantum game theory explanation of disjunction effect . in proc .28th annual conf . of the cognitive science society , ( eds .r. sun and n. miyake ) , pp .131 - 135 .mahwah , nj : erlbaum .la mura p. ( 2008 ) .projective expected utility , in : bruza , p. d. , lawless , w. , van rijsbergen , k. , sofge , d. a. , coecke , b. and clark , s. ( eds . ) , quantum interaction-2 college publications , london , 87 - 93 .
this paper is devoted to the application of the mathematical formalism of quantum mechanics to social (political) science. by using quantum dynamical equations we model the process of decision making in us elections. the crucial point we attempt to make is that the voter's mental state can be represented as a superposition of two possible choices, for either republicans or democrats. however, reality dictates a more complicated situation: typically a voter participates in two elections, i.e., the congress and the presidential elections. in both elections he/she has to decide between two choices. this very feature of the us election system requires that the mental state be represented by a 2-qubit state corresponding to the superposition of 4 different choices (e.g., for republicans in the congress and for a democrat as president). the main issue of this paper is to describe the dynamics of the voters' mental states taking into account the mental and socio-political environment. what is truly novel in this paper is that instead of using schrödinger's equation, which describes the dynamics in the absence of interactions, we here apply the quantum master equation. this equation describes quantum decoherence, i.e., the resolution from superposition to a definite choice.
* longitudinal analysis of written language .* allometric scaling analysis is used to quantify the role of system size on general phenomena characterizing a system , and has been applied to systems as diverse as the metabolic rate of mitochondria and city growth .indeed , city growth shares two common features with the growth of written text : ( i ) the zipf law is able to describe the distribution of city sizes regardless of country or the time period of the data , and ( ii ) city growth has inherent constraints due to geography , changing labor markets and their effects on opportunities for innovation and wealth creation , just as vocabulary growth is constrained by human brain capacity and the varying utilities of new words across users .we construct a word counting framework by first defining the quantity as the number of times word is used in year . sincethe number of books and the number of distinct words grow dramatically over time , we define the _ relative _ word use , , as the fraction of the total body of text occupied by word in the same year where the quantity is the total number of indistinct word uses while is the total number of distinct words digitized from books printed in year .both the ( `` types '' giving the vocabulary size ) and the ( `` tokens '' giving the size of the body of text ) are generally increasing over time .+ * the zipf law and the two scaling regimes .* zipf investigated a number of bodies of literature and observed that the frequency of any given word is roughly inversely proportional to its rank , with the frequency of the -ranked word given by the relation with a scaling exponent .this empirical law has been confirmed for a broad range of data , ranging from income rankings , city populations , and the varying sizes of avalanches , forest fires and firm size to the linguistic features of nonconding dna .the zipf law can be derived through the `` principle of least effort , '' which minimizes the communication noise between speakers ( writers ) and listeners ( readers ) .the zipf law has been found to hold for a large dataset of english text , but there are interesting deviations observed in the lexicon of individuals diagnosed with schizophrenia . here , we also find statistical regularity in the distribution of relative word use for 11 different datasets , each comprising more than half a million distinct words taken from millions of books .[ cols="<,^,^,^,^,^,^ " , ] [ tablesummary1 ] figure [ fpdfall ] shows the probability density functions resulting from data aggregated over all the years ( a , b ) as well as over 1-year periods as demonstrated for the year ( c , d ) .regardless of the language and the considered time span , the probability density functions are characterized by a striking two - regime scaling , which was first noted by ferrer i cancho and sol , and can be quantified as }\\ f^{-\alpha_+ } , & \mbox{if } f > f_{\times } \mbox { [ `` kernel lexicon '' ] } \ .\end{cases } \label{pdff}\ ] ] these two regimes , designated `` kernel lexicon '' and `` unlimited lexicon , '' are thought to reflect the cognitive constraints of the brain s finite vocabulary .the specialized words found in the unlimited lexicon are not universally shared and are used significantly less frequently than the words in the kernel lexicon .this is reflected in the kink in the probability density functions and gives rise to the anomalous two - scaling distribution shown in fig .[ fpdfall ] . 
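As an illustration of these definitions, the following sketch computes the relative word use f_i = n_i / N and a rough rank-frequency exponent for a plain-text corpus. The file name `corpus.txt` is a placeholder (not the Google n-gram data), and the simple log-log least-squares fit stands in for the maximum-likelihood estimator actually used in the paper.

```python
import re
from collections import Counter
import numpy as np

def word_statistics(text):
    """Relative word use f_i = n_i / N for one body of text (toy example)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    N = sum(counts.values())          # total number of word uses (the tokens)
    W = len(counts)                   # number of distinct words (the types)
    f = {w: n / N for w, n in counts.items()}
    return f, N, W

def rank_frequency_exponent(f, top=1000):
    """Rough least-squares estimate of zeta in f(r) ~ r^(-zeta) from the top ranks."""
    freqs = np.sort(np.array(list(f.values())))[::-1][:top]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

text = open("corpus.txt").read()          # placeholder path
f, N, W = word_statistics(text)
print("tokens N =", N, " types W =", W)
print("estimated Zipf exponent zeta ~", rank_frequency_exponent(f))
```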
the exponent and the corresponding rank - frequency scaling exponent in eq .( [ zipfr ] ) are related asymptotically by with no analogous relationship for the unlimited lexicon values and .table [ tablesummary1 ] lists the average and values calculated by aggregating values for each year using a maximum likelihood estimator for the power - law distribution .we characterize the two scaling regimes using a crossover region around to distinguish between and : ( i ) corresponds to and ( ii ) corresponds to .for the words that satisfy that comprise the kernel lexicon , we verify the zipf scaling law ( corresponding to ) for all corpora analyzed .for the unlimited lexicon regime , however , the zipf law is not obeyed , as we find .note that is significantly smaller in the hebrew , chinese , and the russian corpora , which suggests that a more generalized version of the zipf law may be needed , one which is slightly language - dependent , especially when taking into account the usage of specialized words from the unlimited lexicon . + * the heaps law and the increasing marginal returns of new words .* heaps observed that vocabulary size , i.e. the number of distinct words , exhibits a sub - linear growth with document size .this observation has important implications for the `` return on investment '' of a new word as it is established and becomes disseminated throughout the literature of a given language . as a proxy for this return ,heaps studied how often new words are invoked in lieu of preexisting competitors and examined the linguistic value of new words and ideas by analyzing the relation between the total number of words printed in a body of text , and the number of these which are distinct , i.e. the vocabulary size .the marginal returns of new words , quantifies the impact of the addition of a single word to the vocabulary of a corpus on the aggregate output ( corpus size ) . for individual books ,the empirically - observed scaling relation between and obeys with , with eq .( [ hlaw ] ) referred to as `` the heaps law '' .it has subsequently been found that heaps law emerges naturally in systems that can be described as sampling from an underlying zipf distribution . in an information theoretic formulation of the the abstract concept of word cost ,b. 
mandelbrot predicted the relation in 1961 , where is the scaling exponent corresponding to , as in eqs .( [ pdff ] ) and ( [ zipfpdf ] ) .this prediction is limited to relatively small texts where the unlimited lexicon , which manifests in the regime , does not play a significant role .a mathematical extension of this result for general underlying rank - distributions is also provided by karlin using an infinite urn scheme , and extended to broader classes of heavy - tailed distributions recently by gnedin et al .recent research efforts using stochastic master equation techniques to model the growth of a book have also predicted this intrinsic relation between zipf s law and heaps law .figure [ heapslawfc ] confirms a sub - linear scaling ( ) between and for each corpora analyzed .these results show how the marginal returns of new words are given by which is an increasing function of for .thus , the relative increase in the induced volume of written languages is larger for new words than for old words .this is likely due to the fact that new words are typically technical in nature , requiring additional explanations that put the word into context with pre - existing words .specifically , a new word requires the additional use of preexisting words as a result of both ( i ) the explanation of the content of the new word using existing technical terms , and ( ii ) the grammatical infrastructure necessary for that explanation .hence , there are large spillovers in the size of the written corpus that follow from the intricate dependency structure of language stemming from the various grammatical roles . in order to investigate the role of rare and new words ,we calculate and using only words that have appeared at least times .we select the absolute number of uses as a word use threshold because a word in a given year can not appear with a frequency less than , hence any criteria using relative frequency would necessarily introduce a bias for small corpora samples .this choice also eliminates words that can spuriously arise from optical character recognition ( ocr ) errors in the digitization process and also from intrinsic spelling errors and orthographic spelling variations .figures [ multifceng ] and [ multifcother ] show the relational dependence of and on the exclusion of low - frequency words using a variable cutoff with .as increases the heaps scaling exponent increases from , approaching , indicating that core words are structurally integrated into language as a proportional background .interestingly , altmann et al . recently showed that `` word niche '' can be an essential factor in modeling word use dynamics .new niche words , though they are marginal increases to a language s lexicon , are themselves anything but `` marginal '' - they are core words within a subset of the language .this is particularly the case in online communities in which individuals strive to distinguish themselves on short timescales by developing stylistic jargon , highlighting how language patterns can be context dependent .we now return to the relation between heaps law and zipf s law .table [ tablesummary1 ] summarizes the values calculated by means of ordinary least squares regression using to relate to . 
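A small sketch of the Heaps-law measurement may help here: it reads a plain-text corpus sequentially, records the vocabulary size V at increasing corpus sizes N, and estimates b by ordinary least squares on the log-log curve. The corpus path is a placeholder and the implementation is only a toy stand-in for the analysis of the n-gram data.

```python
import re
import numpy as np

def heaps_curve(text, n_points=50):
    """Vocabulary size V as a function of corpus size N while reading the text."""
    words = re.findall(r"[a-z']+", text.lower())
    seen, sizes, vocab = set(), [], []
    checkpoints = np.linspace(1, len(words), n_points, dtype=int)
    for i, w in enumerate(words, start=1):
        seen.add(w)
        if i in checkpoints:
            sizes.append(i)
            vocab.append(len(seen))
    return np.array(sizes), np.array(vocab)

def heaps_exponent(sizes, vocab):
    """OLS estimate of b and k in V ~ k * N^b (log-log regression)."""
    b, log_k = np.polyfit(np.log(sizes), np.log(vocab), 1)
    return b, np.exp(log_k)

text = open("corpus.txt").read()               # placeholder corpus
N, V = heaps_curve(text)
b, k = heaps_exponent(N, V)
print("Heaps exponent b ~", b)
# With b < 1 the 'marginal need' dV/dN = b*V/N decreases with corpus size, while
# the text induced per new word dN/dV = N/(b*V) grows (increasing marginal returns).
print("dV/dN at the end of the corpus:", b * V[-1] / N[-1])
```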
for find that for all languages analyzed , as expected from heaps law , but for the value significantly deviates from , and for the value begins to saturate approaching unity .considering that implies for all corpora , figures [ multifceng ] and [ multifcother ] shows that we can confirm the relation only for the more pruned corpora that require relatively large .this hidden feature of the scaling relation highlights the underlying structure of language , which forms a dependency network between the common words of the kernel lexicon and their more esoteric counterparts in the unlimited lexicon . moreover , the function is a monotonically decreasing function for , demonstrating the _ decreasing marginal need _ for additional words as a corpora grows .in other words , since we get more and more `` mileage '' out of new words in an already large language , additional words are needed less and less .+ * corpora size and word - use fluctuations . *lastly , it is instructive to examine how vocabulary size and the overall size of the corpora affect fluctuations in word use . figure [ volumecorpustotalsizefc ] shows how and vary over time over the past two centuries .note that , apart from the periods during the two world wars , the number of words printed , which we will refer to as the `` literary productivity '' , has been increasing over time .the number of distinct words ( vocabulary size ) has also increased reflecting basic social and technological advancement . to investigate the role of fluctuations , we focus on the logarithmic growth rate , commonly used in finance and economics , \label{r2}\end{aligned}\ ] ] to measure the relative growth of word use over 1-year periods , 1 year .recent quantitative analysis on the distribution of word use growth rates indicates that annual fluctuations in word use deviates significantly from the predictions of null models for language evolution .we define an aggregate fluctuation scale , , using a frequency cutoff ] is the minimum corpora size over the period of analysis , and so ] .visual inspection suggests a general decrease in over time , marked by sudden increases during times of political conflict .hence , the persistent increase in the volume of written language is correlated with a persistent downward trend what could be thought of as the `` system temperature '' : as a language grows and matures it also `` cools off '' .since this cooling pattern could arise as a simple artifact of an independent identically distributed ( i.i.d ) sampling from an increasingly large dataset , we test the scaling of with corpora size .figure [ corpustotalsizefcsigmascaling](a ) shows that for large , each language is characterized by a scaling relation with language - dependent scaling exponent .we use $ ] , which defines the frequency threshold for the inclusion of a given word in our analysis .there are two candidate null models which give insight into the limiting behavior of .the gibrat proportional growth model predicts and the yule- simon urn model predicts .we observe , which indicates that the fluctuation scale decreases more slowly with increasing corpora size than would be expected from the yule - simon urn model prediction , deducible via the `` delta method '' for determining the approximate scaling of a distribution and its standard deviation .to further compare the roles of the kernel lexicon versus the unlimited lexicon , we apply our pruning method to quantify the dependence of the scaling exponent on the fluctuations arising from rare words 
.we omit words from our calculation of if their use in year falls below the word - use threshold .[ corpustotalsizefcsigmascaling](b ) shows that increases from values close to 0 to values less than 1/2 as increases exponentially .an increasing confirms our conjecture that rare words are largely responsible for the fluctuations in a language .however , because of the dependency structure between words , there are residual fluctuation spillovers into the kernel lexicon likely accounting for the fact that even when the fluctuations from the unlimited lexicon are removed .a size - variance relation showing that larger entities have smaller characteristic fluctuations was also demonstrated at the scale of individual words using the same _ google n - gram _dataset .moreover , this size - variance relation is strikingly analogous to the decreasing growth rate volatility observed as complex economic entities ( i.e. firms or countries ) increase in size , which strengthens the analogy of language as a complex ecosystem of words governed by competitive forces .further possible explanations for is that language growth is counteracted by the influx of new words which tend to have growth - spurts around 30 - 50 years following their birth in the written corpora .moreover , the fluctuation scale is positively influenced by adverse conditions such as wars and revolutions , since a decrease in may decrease the competitive advantage that old words have over new words , allowing new words to break through .the globalization effect , manifesting from increased human mobility during periods of conflict , is also responsible for the emergence of new words within a language .a coevolutionary description of language and culture requires many factors and much consideration .while scientific and technological advances are largely responsible for written language growth as well as the birth of many new words , socio - political factors also play a strong role .for example , the sexual revolution of the 1960s triggered the sudden emergence of the words `` girlfriend '' and `` boyfriend '' in the english corpora , illustrating the evolving culture of romantic courting .such technological and socio - political perturbations require case - by - case analysis for any deeper understanding , as demonstrated comprehensively by michel et al . .here we analyzed the macroscopic properties of written language using the _ google books _ database .we find that the word frequency distribution is characterized by two scaling regimes .while frequently used words that constitute the kernel lexicon follow the zipf law , the distribution has a less - steep scaling regime quantifying the rarer words constituting the _ unlimited lexicon_. our result is robust across languages as well as across other data subsets , thus extending the validity of the seminal observation by ferrer i cancho and sol , who first reported it for a large body of english text .the kink in the slope preceding the entry into the unlimited lexicon is a likely consequence of the limits of human mental ability that force the individual to optimize the usage of frequently used words and forget specialized words that are seldom used .this hypothesis agrees with the `` principle of least effort '' that minimizes communication noise between speakers ( writers ) and listeners ( readers ) , which in turn may lead to the emergence of the zipf law . 
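As a side illustration of the fluctuation measure discussed above, the following sketch computes yearly logarithmic growth rates for a handful of synthetic word trajectories, the aggregate fluctuation scale sigma(t) after pruning rare words, and an ordinary-least-squares estimate of the exponent beta in sigma ~ N^(-beta). All input values are invented for the example; real input would come from the n-gram counts.

```python
import numpy as np

def growth_fluctuations(counts, corpus_sizes, u_min=10):
    """counts: dict word -> yearly use counts u_i(t); corpus_sizes: array N(t).
    Returns the yearly fluctuation scale sigma(t) of the log growth rates and an
    OLS estimate of beta in sigma ~ N^(-beta)."""
    years = len(corpus_sizes)
    sigmas = []
    for t in range(1, years):
        rates = [np.log(u[t]) - np.log(u[t - 1])
                 for u in counts.values()
                 if u[t] >= u_min and u[t - 1] >= u_min]   # prune rare words
        sigmas.append(np.std(rates))
    slope, _ = np.polyfit(np.log(corpus_sizes[1:]), np.log(sigmas), 1)
    return np.array(sigmas), -slope   # beta = -slope

# Tiny synthetic example: three 'words' over four 'years'.
counts = {
    "alpha": np.array([100, 120, 130, 160]),
    "beta":  np.array([400, 380, 420, 430]),
    "gamma": np.array([50, 70, 40, 90]),
}
corpus_sizes = np.array([10_000, 12_000, 13_000, 16_000])
sigma, beta = growth_fluctuations(counts, corpus_sizes)
print("yearly fluctuation scale:", sigma, " estimated beta:", beta)
```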
using an extremely large written corpora that documents the profound expansion of language over centuries , we analyzed the dependence of vocabulary growth on corpus growth and validate the heaps law scaling relation given by eq .furthermore we systematically prune the corpora data using a word occurrence threshold , and comparing the resulting value to the value , which is stable since it is derived from the `` kernel '' lexicon .we conditionally confirm the theoretical prediction , which we validate only in the case that the extremely rare `` unlimited '' lexicon words are not included in the data sample ( see figs .[ multifceng ] and [ multifcother ] ) .the economies of scale ( ) indicate that there is an _ increasing marginal return _ for new words , or alternatively , a _ decreasing marginal need _ for new words , as evidenced by allometric scaling .this can intuitively be understood in terms of the increasing complexities and combinations of words that become available as more words are added to a language , lessening the need for lexical expansion . however , a relationship between new words and existing words is retained .every introduction of a word , from an informal setting ( e.g. an expository text ) to a formal setting ( e.g. a dictionary ) is yet another chance for the more common describing words to play out their respective frequencies , underscoring the hierarchy of words .this can be demonstrated quite instructively from eq .( [ marginal ] ) which implies that for that , meaning that it requires a quantity proportional to the vocabulary size to introduce a new word , or alternatively , that a quantity proportional to necessarily results from the addition . though new words are needed less and less, the expansion of language continues , doing so with marked characteristics . taking the growth rate fluctuations of word use to be a kind of temperature, we note that like an ideal gas , most languages `` cool '' when they expand .the fact that the relationship between the temperature and corpus volume is a power law , one may , loosely speaking , liken language growth to the expansion of a gas or the growth of a company .in contrast to the static laws of zipf and heaps , we note that this finding is of a dynamical nature .other aspects of language growth may also be understood in terms of expansion of a gas . since larger literary productivity imposes a downward trend on growth rate fluctuations which also implies that the ranking of the top words and phases becomes more stable productivity itselfcan be thought of as a kind of inverse pressure in that highly productive years are observed to `` cool '' a language off . also , it is during the `` high - pressure '' low productivity years that new words tend to emerge more frequently .interestingly , the appearance of new words is more like gas condensation , tending to cancel the cooling brought on by language expansion .these two effects , corpus expansion and new word `` condensation , '' therefore act against each other . across all corporawe calculate a size - variance scaling exponent , bounded by the prediction of ( gibrat growth model ) and ( yule - simon growth model ) . in the context of allometric relations , bettencourt et al . 
note that the scaling relations describing the dynamics of cities show an _ increase _ in the characteristic pace of life as the system size grows , whereas those found in biological systems show _ decrease _ in characteristic rates as the system size grows .since the languages we analyzed tend to `` cool '' as they expand , there may be deep - rooted parallels with biological systems based on principles of efficiency .languages , like biological systems demonstrate economies of scale ( ) manifesting from a complex dependency structure that mimics a hierarchical `` circulatory system '' required by the organization of language and the limits of the efficiency of the speakers / writers who exchange the words . 99 , + http://books.google.com/ngrams .evans , j. a. and foster , j. g. metaknowledge . , 721725 ( 2011 ) .ball , p. .( springer - verlag , berlin , 2012 ) .helbing , d. , balietti , s. how to create an innovation accelerator .phys . j. special topics _ * 195 * , 101136 ( 2011 ) .lazer , d. , et al . computational social science ., 721723 ( 2009 ) .barabsi , a. l. the network takeover ., 1416 ( 2012 ) .vespignani , a. modeling dynamical processes in complex socio - technical systems ., 3239 ( 2012 ) .michel , j .- b . , et al . quantitative analysis of culture using millions of digitized books ., 176182 ( 2011 ) .petersen , a. m. , tenenbaum , j. , havlin , s. , and stanley , h. e. statistical laws governing fluctuations in word use from word birth to word death ., 313 ( 2012 ) .gao , j. , hu , j. , mao , x. , and perc , m. culturomics meets random fractal theory : insights into long - range correlations of social and natural phenomena over the past two centuries ., 19561964 ( 2012 ) .zipf , g. k. .addison - wesley , cambridge , ma , ( 1949 ) .tsonis , a. a. , schultz , c. , and tsonis , p. a. zipf s law and the structure and evolution of languages . , 1213 ( 1997 ) .serrano , m. . ,flammini , a. , and menczer , f. modeling statistical properties of written text . , e5372 ( 2009 ) . ,r. and sol , r. v. two regimes in the frequency of words and the origin of complex lexicons : zipf s law revisited ., 165173 ( 2001 ) . ,r. the variation of zipf s law in human language . , 249257 ( 2005 ) . , r. and sol , r. v. least effort and the origins of scaling in human language ., 788791 ( 2003 ) .baek , s. k. , bernhardsson , s. , and minnhagen , p. zipf s law unzipped ., 043004 ( 2011 ) .heaps , h. s. .( academic press , new york , 1978 ) .bernhardsson , s. , , l. e. , and minnhagen , p. the meta book and size - dependent properties of written language ., 123015 ( 2009 ) .bernhardsson , s. , , l. e. , and minnhagen , p. size - dependent word frequencies and translational invariance of books ., 330341 ( 2010 ) .kleiber , m. body size and metabolism ., 315351 ( 1932 ) .west , g. b. allometric scaling of metabolic rate from molecules and mitochondria to cells and mammals ., 24732478 ( 2002 ) .makse , h. a. , havlin , s. , and stanley , h. e. modelling urban growth patterns ., 608612 ( 1995 ) .makse , h. a. , jr ., j. s. a. , batty , m. , havlin , s. , and stanley , h. e. modeling urban growth patterns with correlated percolation ., 70547062 ( 1998 ) .rozenfeld , h. d. , rybski , d. , , j. s. , batty , m. , stanley , h. e. , and makse , h. a. laws of population growth ., 1870218707 ( 2008 ) .gabaix , x. zipf s law for cities : an explanation ., 739767 ( 1999 ) .bettencourt , l. m. a. , lobo , j. , helbing , d. , kuhnert , c. , and west , g. b. 
growth , innovation , scaling , and the pace of life in cities ., 73017306 ( 2007 ) .batty , m. the size , scale , and shape of cities ., 769771 ( 2008 ) .rozenfeld , h. d. , rybski , d. , gabaix , x. , and makse , h. a. the area and population of cities : new insights from a different perspective on cities . , 22052225 ( 2011 ) .newman , m. e. j. power laws , pareto distributions and zipf s law ., 323351 ( 2005 ) .stanley , m. h. r. , buldyrev , s. v. , havlin , s. , mantegna , r. , salinger , m. , and stanley , h. e. zipf plots and the size distribution of firms . , 453457 ( 1995 ) .mantegna , r. n. , et al .systematic analysis of coding and noncoding dna sequences using methods of statistical linguistics ., 29392950 ( 1995 ) .clauset , a. , shalizi , c. r. , and newman , m. e. j. power - law distributions in empirical data ., 661703 ( 2009 ) .mandelbrot , b. on the theory of word frequencies and on related markovian models of discourse , in : r. jakobson , structure of language and its mathematical aspects ._ proceedings of symposia in applied mathematics _ * vol .xii * , 190219 ( 1961 ) .karlin , s. central limit theorems for certain infinite urn schemes ._ journal of mathematics and mechanics _ * 17 * , 373401 ( 1967 ) .gnedin , a. , hansen , b. , pitman , j. notes on the occupancy problem with infinitely many boxes : general asymptotics and power laws ._ probability surveys _ * 4 * , 146171 ( 2007 ) .van leijenhorst , d. c. , van der weide , th .p. a formal derivation of heaps law .sci . _ * 170 * , 263272 ( 2005 ) .l , l. , zhang , z - k . ,zhou , t. zipf s law leads to heaps law : analyzing their relation in finite - size systems . _plos one _ * 5 * , e14139 ( 2010 ) .steyvers , m. and tenenbaum , j. b. the large - scale structure of semantic networks : statistical analyses and a model of semantic growth . , 4178 ( 2005 ) .markosova , m. network model of human language ., 661666 ( 2008 ) .altmann , e. g. , pierrehumbert , j. b. , and motter , a. e. niche as a determinant of word fate in online groups ., e19009 ( 2011 ) .riccaboni , m. , pammolli , f. , buldyrev , s. v. , ponta , l. , and stanley , h. e. the size variance relationship of business firm growth rates . , 1959519600 ( 2008 ) .oehlert , g. w. a note on the delta method ._ the american statistician _ * 46 * , 2729 ( 1992 ) .amaral , l. a. n. , et al . scaling behavior in economics :i. empirical results for company growth ._ j. phys .i france _ * 7 * , 621633 ( 1997 ) .amaral , l. a. n. , et al .power law scaling for a system of interacting units with complex internal structure .* 80 * , 13851388 ( 1998 ) .fu , d. , pammolli , f. , buldyrev , s. v. , riccaboni , m. , matia , k. , yamasaki , k. , and stanley , h. e. the growth of business firms : theoretical framework and empirical evidence . , 1880118806 ( 2005 ) .podobnik , b. , horvatic , d. , petersen , a. m. , stanley , h. e. quantitative relations between risk , return , and firm size ._ epl _ * 85 * , 50003 ( 2009 ) .podobnik , b. , horvatic , d. , petersen , a. m. , njavro , m. , stanley , h. e. common scaling behavior in finance and macroeconomics .j. b _ * 76 * , 487490 ( 2010 ) ._ epl _ * 85 * , 50003 ( 2009 ) .mufwene , s. .( cambridge univ . press , cambridge , uk , 2001 ) .mufwene , s. .( continuum international publishing group , new york , ny , 2008 ) .perc , m. evolution of the most common english words and phrases over the centuries ._ j. r. soc .interface _ * 9 * , 33233328 ( 2012 ) . sigman , m. and cecchi , g. a. 
global organization of the wordnet lexicon . ,17421747 ( 2002 ) . , e. , dorow , b. , eckmann , j .-p . , and moses , e. hierarchical structures induce long - range dynamical correlations in written texts ., 79567961 ( 2006 ) . , e. a. , cristadoro , g. , and esposti , m. d. on the origin of long - range correlations in texts ., 1158211587 ( 2012 ) .montemurro , m. a. and pury , p. a. long - range fractal correlations in literary corpora ., 451461 ( 2002 ) .corral , a. , , r. , and , a. universal complex structures in written language . : 0901.2924 ( 2009 ) .altmann , e. g. , pierrehumbert , j. b. , and motter , a. e. beyond word frequency : bursts , lulls , and scaling in the temporal distributions of words . , e7678 ( 2009 ) .amp acknowledges support from the imt lucca foundation .jt , sh and he s acknowledge support from the dtra , onr , the european epiwork and linc projects , and the israel science foundation .mp acknowledges support from the slovenian research agency .a. m. p. , j. t. , s. h. , h. e. s. , & mp designed research , performed research , wrote , reviewed and approved the manuscript .a. m. p. performed the numerical and statistical analysis of the data .
we analyze the occurrence frequencies of over 15 million words recorded in millions of books published during the past two centuries in seven different languages . for all languages and chronological subsets of the data we confirm that two scaling regimes characterize the word frequency distributions , with only the more common words obeying the classic zipf law . using corpora of unprecedented size , we test the allometric scaling relation between the corpus size and the vocabulary size of growing languages to demonstrate a decreasing marginal need for new words , a feature that is likely related to the underlying correlations between words . we calculate the annual growth fluctuations of word use which has a decreasing trend as the corpus size increases , indicating a slowdown in linguistic evolution following language expansion . this `` cooling pattern '' forms the basis of a third statistical regularity , which unlike the zipf and the heaps law , is dynamical in nature . books in libraries and attics around the world constitute an immense `` crowd - sourced '' historical record that traces the evolution of culture back beyond the limits of oral history . however , the disaggregation of written language into individual books makes the longitudinal analysis of language a difficult open problem . to this end , the book digitization project at _ google _ inc . presents a monumental step forward providing an enormous , publicly accessible , collection of written language in the form of the _ google books ngram viewer _ web application . approximately 4% of all books ever published have been scanned , making available over occurrence time series ( word - use trajectories ) that archive cultural dynamics in seven different languages over a period of more than two centuries . this dataset highlights the utility of open `` big data , '' which is the gateway to `` metaknowledge '' , the knowledge about knowledge . a digital data deluge is sustaining extensive interdisciplinary research efforts towards quantitative insights into the social and natural sciences . `` culturomics , '' the use of high - throughput data for the purpose of studying human culture , is a promising new empirical platform for gaining insight into subjects ranging from political history to epidemiology . as first demonstrated by michel et al . , the _ google _ n - gram dataset is well - suited for examining the microscopic properties of an entire language ecosystem . using this dataset to analyze the growth patterns of individual word frequencies , petersen et al . recently identified tipping points in the life trajectory of new words , statistical patterns that govern the fluctuations in word use , and quantitative measures for cultural memory . the statistical properties of cultural memory , derived from the quantitative analysis of individual word - use trajectories , were also investigated by gao et al . , who found that words describing social phenomena tend to have different long - range correlations than words describing natural phenomena . here we study the growth and evolution of written language by analyzing the macroscopic scaling patterns that characterize word - use . using the _ google _ 1-gram data collected at the 1-year time resolution over the period 1800 - 2008 , we quantify the annual fluctuation scale of words within a given corpora and show that languages can be said to `` cool by expansion . 
'' this effect constitutes a dynamic law , in contrast to the static laws of zipf and heaps which are founded upon snapshots of single texts . the zipf law , quantifying the distribution of word frequencies , and the heaps law , relating the size of a corpus to the vocabulary size of that corpus , are classic paradigms that capture many complexities of language in remarkably simple statistical patterns . while these laws have been exhaustively tested on relatively small snapshots of empirical data , here we test the validity of these laws using extremely large corpora . interestingly , we observe two scaling regimes in the probability density functions of word usage , with the zipf law holding only for the set of more frequently used words , referred to as the `` kernel lexicon '' by ferrer i cancho et al . . the word frequency distribution for the rarely used words constituting the `` unlimited lexicon '' obeys a distinct scaling law , suggesting that rare words belong to a distinct class . this `` unlimited lexicon '' is populated by highly technical words , new words , numbers , spelling variants of kernel words , and optical character recognition ( ocr ) errors . many new words start in relative obscurity , and their eventual importance can be under - appreciated by their initial frequency . this fact is closely related to the information cost of introducing new words and concepts . for single topical texts , heaps observed that the vocabulary size exhibits sub - linear growth with document size . extending this concept to entire corpora , we find a scaling relation that indicates a decreasing `` marginal need '' for new words which are the manifestation of cultural evolution and the seeds for language growth . we introduce a pruning method to study the role of infrequent words on the allometric scaling properties of language . by studying progressively smaller sets of the kernel lexicon we can better understand the marginal utility of the core words . the pattern that arises for all languages analyzed provides insight into the intrinsic dependency structure between words . the correlations in word use can also be author and topic dependent . bernhardsson et al . recently introduced the `` metabook '' concept , according to which word - frequency structures are author - specific : the word - frequency characteristics of a random excerpt from a compilation of everything that a specific author could ever conceivably write ( his / her `` metabook '' ) should accurately match those of the author s actual writings . it is not immediately obvious whether a compilation of all the metabooks of all authors would still conform to the zipf law and the heaps law . the immense size and time span of the _ google _ n - gram dataset allows us to examine this question in detail .
software quality costs and economics have been subject to research for decades now .consequently , there is a variety of corresponding models on all levels of abstraction as a result of this research .the development and improvement of these models is important , especially for the decision makers in real software projects .this becomes obvious when considering that there are many estimates that assign 3050% of the development costs to quality assurance .a newer study of the national institute of standards and technology of the united states found that even 80% of the development costs are caused by the detection and removal of defects .hence , models are needed to control and minimise these costs . yet , for this to be feasible we need to incorporate them into existing development processes .thereby , we make them operational and accessible for decision - makers .most models are not directly applicable in a real development process .they often only classify the relevant costs but do not show how to use this classification .even operational models often neglect the fact that they need to be used in the context of a specific process model .the usage of such models is mainly in an ad - hoc manner and they are not systematically included in process models .the contribution lies in the seamless integration of a model for analytical software quality assurance into the existing process model v - modell xt .we show how our qa model is operationally used and which roles , products , and activities are involved in using the model in practice .this allows an easy adoption of the model for a project that follows the v - modell xt .although similar ad - hoc usages of such models are practice in some companies , we are not aware of an earlier systematic integration .first , we introduce quality economics in general and in terms of the analytical model in sec .[ sec : model ] . in sec .[ sec : vmodell ] we describe the basics of the considered process model and its underlying meta - model .[ sec : integration ] then shows the integration of the model with the v - modell xt .we finish with related work in sec . [sec : related ] and final conclusions in sec .[ sec : conclusions ] .we first describe the cost types and other factors that are important in the context of the economics of analytical quality assurance .then we give a short overview of the analytical model from that is to be integrated in the process model v - modell xt later .we reduce the classical paf ( prevention , appraisal , failure ) model of quality costs to an af ( appraisal , failure ) model .we ignore _ prevention costs _ that contain the costs of preventing defects by constructive qa because constructive qa has significantly different characteristics . _appraisal costs _contain all costs for checking artefacts to detect defects , e.g. , test specification and execution .the debugging is then part of the failure costs .when the failure occurs in - house it incurs _ internal failure costs_. failures during operation at the customer cause _ external failure costs_. we refine these parts so that we can identify the relevant cost factors . the complete refined model is shown in fig . [ fig : costs_revisited ] .the appraisal costs are detailed to _ setup _ and _ execution _ costs .the former constituting all initial costs for buying test tools , configuring the test environment , and so on .the latter includes costs that are connected to actual test executions or review meetings , mainly personnel costs . 
on the nonconformance side , we have _ fault removal _ costs that can be attributed to the internal failure costs as well as the external failure costs .this is because if we found a fault and wanted to remove it , it would always result in costs no matter whether caused by an internal or external failure .actually , there does not have to be a failure at all .considering code inspections , faults are found and removed that have never caused a failure during testing .fault removal costs also contain the costs for necessary re - testing and re - inspections .external failures also cause _ effect _ costs . those are all further costs associated with the failure apart from the removal costs .for example , compensation costs could be part of the effect costs , if the failure caused some kind of damage at the customer site .we might also include other costs such as loss of sales because of bad reputation in the effect costs . furthermore , there are also technical factors that are important for the quality economics of analytical quality assurance .the two main factors that we consider in the following model are ( 1 ) the difficulty of defect - detection and ( 2 ) the failure probability of faults .we denote the probability that a specific defect - detection technique does not detect a defect of a specific type as its difficulty .this factor has been shown to be influential . a smaller but still substantial impact stems from the failure probability of faults .this is important because many faults occur only with a very small probability during operation .we use the stochastic quality assurance model from as the model to be integrated in the v - modell xt .actually , we only consider the _ practical _ model from this work because that is the one to be applied .it is derived from a _ theoretical _ model that incorporates more factors and more detail . however , the main factors described above are still contained in this model . the main idea of the model is to compute the expected values of the costs and benefits of quality assurance . for this purpose , they are structured in three components : direct costs , future costs and revenues .the determination of the expected values is based on average values calculated from literature and finished projects . during the project , measured data can be used to refine the results .we define \tau_i to be the defect type of fault i .it is determined using the defect type distribution of older projects . in this way we do not have to look at individual faults but analyse and measure defect types for which the determination of interesting quantities is possible during quality assurance .we will not further elaborate the concept of defect types but refer to defect classification approaches from ibm or hp . for the sake of a simple presentation , we first give equations for a single defect - detection technique and generalise that to a combination of techniques .we start with the direct costs of a defect - detection technique .they are all costs that occur directly by using the technique : \[ e[d_a ] = u_a + e_a(t_a ) + \sum_{i } ( 1 - \theta_a(\tau_i , t_a ) ) \, v_a(\tau_i ) , \] where u_a is the average setup cost for technique a , e_a(t_a ) is the average execution cost for a with effort t_a , and v_a(\tau_i ) is the average removal cost for defect type \tau_i .the future costs are those costs that will occur when defects are not detected by the technique : \[ e[f_a ] = \sum_i \pi_{\tau_i } \, \theta_a(\tau_i , t_a ) \, ( v_f(\tau_i ) + f_f(\tau_i ) ) . \] finally , the revenues are the saved future costs , i.e.
, the costs that are not incurred because the technique detects the corresponding defects : \[ e[r_a ] = \sum_i \pi_{\tau_i } \, ( 1 - \theta_a(\tau_i , t_a ) ) \, ( v_f(\tau_i ) + f_f(\tau_i ) ) , \] where v_f(\tau_i ) is the average removal cost and f_f(\tau_i ) the average effect cost of a fault of type \tau_i in the field .the extension to more than one technique needs to consider whether the defects have been found by earlier used techniques .the following is the equation for the expected value of the direct costs : \[ e[d ] = \sum_{x \in \mathcal{x } } \biggl [ u_x + e_x(t_x ) + \sum_i ( 1 - \theta_x(\tau_i , t_x ) ) \prod_{y < x } \theta_y(\tau_i , t_y ) \, v_x(\tau_i ) \biggr ] , \] where \mathcal{x } is the ordered set of the used defect - detection techniques .also the expected value of the combined future costs can be formulated in the practical model using defect types : \[ e[f ] = \sum_i \pi_{\tau_i } \prod_{x \in \mathcal{x } } \theta_x(\tau_i , t_x ) \, ( v_f(\tau_i ) + f_f(\tau_i ) ) . \] finally , the expected value of the combined revenues is defined accordingly : \[ e[r ] = \sum_{x \in \mathcal{x } } \sum_i \pi_{\tau_i } ( 1 - \theta_x(\tau_i , t_x ) ) \prod_{y < x } \theta_y(\tau_i , t_y ) \, ( v_f(\tau_i ) + f_f(\tau_i ) ) . \] using the practical model , we identify only seven different types of quantities that are needed to use the model : * estimated number of faults * distribution of defect types * difficulty functions \theta_x(\tau_i , t_x ) for each technique x and type \tau_i * average removal costs for type \tau_i with technique x : v_x(\tau_i ) * average removal costs for type \tau_i in the field : v_f(\tau_i ) * average effect costs for type \tau_i in the field : f_f(\tau_i ) * failure probability of a fault of type \tau_i : \pi_{\tau_i } for an early application of the model , average values from a literature review can be used as first estimates .we did an extensive analysis of those values in and ranked them using sensitivity analysis in . for more specific estimations we can use more sophisticated methods : the coqualmo model allows us to determine an estimate of the number of faults contained in the software .the defect removal effort for different defect types can be predicted using an association mining approach of song et al . .optimisation is the key to using the model operationally in a project .it allows us to calculate an optimal effort distribution for the used defect - detection techniques .only two of the three components of the model are needed because the future costs and the revenues are dependent on each other .there is a specific number of faults that have associated costs when they occur in the field .these costs are divided into the two parts that are associated with the revenues and the future costs , respectively .the total always stays the same , only the size of the parts varies depending on the defect - detection technique .therefore , we use only the direct costs and the revenues for optimisation and consider the future costs to be dependent on the revenues .hence , the optimisation problem can be stated as : maximise e[r ] - e[d ] . by using eq .[ eq : practical_direct_combined ] and eq .[ eq : practical_revenues_combined ] the term to be maximised can be written out explicitly . for the optimisation purposes , we usually also have some restrictions , for example a maximum total effort , a project duration that is either fixed or open , or some fixed orderings of techniques , that have to be taken into account ( a small numerical sketch of these equations and of the resulting optimisation is given below ) .
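all parameter values in the sketch below , the two defect - detection techniques , the linear execution - cost and difficulty functions , and the effort budget are hypothetical placeholders for the averages that the model draws from the literature or from finished projects .

```python
from itertools import product

# hypothetical defect-type data: expected number of faults n, failure
# probability pi, field removal cost v_f and effect cost f_f (placeholders)
TYPES = {
    "logic":     dict(n=40, pi=0.30, v_f=2000.0, f_f=5000.0),
    "interface": dict(n=25, pi=0.10, v_f=1500.0, f_f=1000.0),
}

# hypothetical techniques: setup cost u, execution cost per effort hour,
# in-house removal cost v_x per type, and a difficulty function theta
# (probability of NOT detecting a defect of that type at effort t)
TECHNIQUES = {
    "inspection": dict(u=500.0, rate=80.0, v={"logic": 300.0, "interface": 200.0},
                       theta=lambda typ, t: max(0.2, 1.0 - 0.004 * t)),
    "testing":    dict(u=2000.0, rate=120.0, v={"logic": 500.0, "interface": 400.0},
                       theta=lambda typ, t: max(0.1, 1.0 - 0.006 * t)),
}
ORDER = ["inspection", "testing"]        # fixed ordering: inspections before tests

def expected_costs(efforts):
    """expected direct costs e[d] and revenues e[r] for the ordered techniques."""
    direct, revenue = 0.0, 0.0
    for k, x in enumerate(ORDER):
        tech, t_x = TECHNIQUES[x], efforts[x]
        direct += tech["u"] + tech["rate"] * t_x          # u_x + e_x(t_x)
        for name, info in TYPES.items():
            escaped_before = 1.0
            for y in ORDER[:k]:                           # missed by earlier techniques
                escaped_before *= TECHNIQUES[y]["theta"](name, efforts[y])
            found_here = (1.0 - tech["theta"](name, t_x)) * escaped_before
            direct += info["n"] * found_here * tech["v"][name]
            revenue += info["n"] * info["pi"] * found_here * (info["v_f"] + info["f_f"])
    return direct, revenue

if __name__ == "__main__":
    # brute-force search over the effort split under a 100-hour budget
    best = None
    for a, b in product(range(0, 101, 5), repeat=2):
        if a + b > 100:
            continue
        d, r = expected_costs({"inspection": float(a), "testing": float(b)})
        if best is None or r - d > best[0]:
            best = (r - d, a, b)
    print("best net benefit %.0f at inspection=%d h , testing=%d h" % best)
```

the brute - force search only stands in for whatever optimisation method is actually used ; the point is that e[d ] and e[r ] are cheap to evaluate once the seven quantities listed above are available .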
the latter is typically true for different forms of testing as system tests are always later in the development than unit tests .we assume there is sufficient tool - support available to solve this optimisation problemthe v - modell xt is a recently released german software and system development standard .it covers all relevant management , engineering and supporting processes of software development , for instance project management , quality assurance , offer , bidding and contract management , and also technical disciplines such as requirements engineering , system design and integration , software development and more specific engineering activities .the goals of the v - modell xt are to provide a generic development process model , which is easy to understand and to use , flexibly adaptable to the needs of organisations and projects , and reproducibly leading to developed products of higher quality with less cost and resources spent . in order to extend the v - modell xt it is imperative to know its main concepts .the v - modell xt is based on a rigorous meta - model , which defines all concepts and their relationships .the entire process model strictly follows this meta - model , which is a prerequisite for flexible extensibility .the main concepts of the v - modell xt are : * _ work products _ are the main project results and artefacts ( documents , models , code , deliverable systems ) .they have a defined structure and prescribed content , and can be structured further into specific subjects ( sub - sections ) .work products have a responsible creator and will be quality checked .an evaluation specification defines the requirements for their quality . *_ product dependencies _ define the consistency relations between the contents of different work products . adhering to and checking product dependenciesmakes sure that all work products in a project will be created and kept consistent to existing products .product dependencies are an important means to assure overall product quality and to trace information across products , for instance from requirements to software architecture elements . *_ activities _ define the actions that need to be performed in order to create the work products .activities can be structured further into sub - activities .activities provide support for the actual doing within a project .there is exactly one activity per work product . in casethere are multiple instances or iterations of a certain work product , the respective activity is performed several times , accordingly . each activity creates a work product ; therefore it is directly followed by a product evaluation for qa , if required . * _ roles _ describe profiles of responsibility for the people working in the project .roles will be impersonated by specific people in the project . 
* _ process modules _ group together work products , activities and roles , as well as other v - modell xt elements into self - contained units covering certain project processes , such as project management , requirements management , systems integration , software development , etc .process modules have a hierarchy of dependence .they can be understood , applied and modified independently if adhering to these dependencies .process modules are the main units of tailoring and extension of the v - modell xt .important for the v - modell xt extension mechanism is that process modules can define work products , subjects , activities , sub - activities etc .that modularly and seamlessly extend existing processes of the core process modules .* _ tailoring _ is the process of adapting the v - modell xt to a specific project or organisation .tailoring consists of selecting the appropriate process modules out of the repository of available ones .after the tailoring , a consistent and adapted software development process exists .it is indistinguishable to the user from which process modules which specific elements work products , their subjects ,extensions etc came from .quality assurance is a cornerstone within a process model such as the v - modell xt .it is the main means to ensure result quality by constructive and analytical means .the qa mechanisms of the v - modell xt are manifold and cover the following areas : 1 ._ organisational quality management ._ organisational units define general quality standards , guidelines and metrics to be applied in projects .they also archive results in metric catalogues to be used by subsequent projects in order to continuously improve the process on the organisational level .the v - modell xt does not define these processes but provides an interface to them .the responsible role is the _ quality manager_. 2 ._ project setup ._ during project setup , the _ project manual _ and the _ qa manual _ are created which define standards , general rules and guidelines that must be applied during the project .the _ qa manual _ in particular defines the constructive and analytical methods that are applied for each different type of project result to ensure the desired product quality .responsible for these tasks are the _ project leader _ and the _qa manager_. 3 ._ product evaluation ._ project results are evaluated according to a product specific evaluation specification , following an evaluation procedure .the means applied for evaluating a certain product vary depending on its importance and criticality ._ qa reports _ are created in defined intervals describing the state and potential quality problems of the product under construction based on the results of the product evaluations ._ evaluators _ are responsible for evaluations and the _ qa manager _ for the reports ._ project management ._ the project management decides about general project progress based on project and _ qa reports_. in case of need , it applies corrective means to increase product quality .project management reports to the customer and higher levels in the organisation ; it is responsible for the overall project result quality as well as for observing agreed upon delivery deadlines and costs .the responsible role is the _ project leader_. 
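a minimal sketch of how these meta - model concepts and their relations could be represented in code follows ; the class and attribute names are our own illustration , not the v - modell xt meta - model s actual schema .

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Role:
    name: str                               # e.g. "qa manager"

@dataclass
class WorkProduct:
    name: str
    subjects: List[str] = field(default_factory=list)    # sub-sections of the product
    responsible: Optional[Role] = None

@dataclass
class Activity:
    name: str
    produces: Optional[WorkProduct] = None  # exactly one work product per activity
    sub_activities: List[str] = field(default_factory=list)

@dataclass
class ProductDependency:
    description: str
    products: List[WorkProduct] = field(default_factory=list)  # must stay consistent

@dataclass
class ProcessModule:
    name: str
    depends_on: List[str] = field(default_factory=list)
    roles: List[Role] = field(default_factory=list)
    work_products: List[WorkProduct] = field(default_factory=list)
    activities: List[Activity] = field(default_factory=list)

def tailor(available: List[ProcessModule], selected: List[str]) -> List[ProcessModule]:
    """tailoring: select process modules by name and pull in their dependencies."""
    by_name = {m.name: m for m in available}
    chosen, todo = set(), list(selected)
    while todo:
        name = todo.pop()
        if name in by_name and name not in chosen:
            chosen.add(name)
            todo.extend(by_name[name].depends_on)
    return [m for m in available if m.name in chosen]
```

the tailor function mirrors the idea that selecting process modules , together with their declared dependencies , yields a consistent project - specific process .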
the v - modell xt already contains an optional process module named `` measurement and analysis '' .it is very lightweight and only provides the ideas of applying quality analysing metrics within a project .this process module depends only on the core process module of `` project management '' .besides , it does not have any other dependencies , and does not extend the general process module `` quality assurance '' . in order to keep the compliance level with the standard v - modell xt as high as possible , we replace the existing `` measurement and analysis '' process module with our own one .however , we reuse the elements defined in the existing process module and complement them with new elements and extended content .in this section we show the exemplary embedding of our model in the process model _ v - modell xt _ , as described in the previous section .an integration with other process models such as the rup can be done accordingly . for the v - modell xt integration, we define a new process module `` measurement and analysis of analytical qa '' containing roles , products , and activities .we first give a brief overview of the general analytical qa process and then describe the contents of the process module in more detail .the diagrams shown in fig .[ fig : activity1 ] and fig .[ fig : activity2 ] give an overview of the usage of our model as part of the v - modell xt .we relate the activities to the different qa processes of the v - modell xt described in sec .[ sec : vmqa ] . 1 .the _ quality manager _ is responsible for defining and documenting cross - project quality standards , metrics , and methods , within the _ quality management manual_. furthermore , he defines and maintains the _ metrics catalogue _ that must contain all the necessary input factors summarised in sec .[ sec : needed ] .quality manager _ must also provide an infrastructure that is able to store the corresponding metrics .2 . within a project ,the _ project leader _ is responsible for setting up the project initially .the _ project leader _ performs basic estimations for the project including the defect estimate needed for our model .he can get supporting data from similar projects using the _metrics infrastructure_. based on these estimates , he uses our model to calculate an optimised quality assurance and he documents this as part of the _qa manual_. also during project setup , the _ qa manager _ is responsible for implementing the guidelines from the _ quality management manual _ within the project . in collaboration with the _ project leader _ he defines the _ qa manual _ and provides input for the qa relevant sections of the _ projectmanual_. the project setup results in a completed _ project manual _ where project goals and guidelines for supporting project processes , such as measurement and analysis are defined .during the course of the project , _ evaluators _ are performing the product qa tasks . depending on the specific requirements for specific work productsas expressed in the _ qa manual _ , they are preparing _ evaluation specifications _ and _ evaluation procedures _ for each evaluated product . the _ project plan _ , under responsibility of project management ,plans the occurrences of the product evaluations and qa measures . according to these guidelines, the _ evaluators _ perform the actual product evaluations and document the results in _ evaluation reports _ , which include all measurements that can be later used in our model such as the number and type of the detected defects . 
in regular intervals , the _ qa manager_ compiles _ quality status reports _ out of the product _evaluation reports_. 4 .the _ project leader _ and project management are continuously assessing project progress and are responsible for making regular project progress decisions , which act as `` quality gates ''. one important source of input for these decisions are the _ quality status reports_. project management can use the available information to apply our model to evaluate different scenarios and optimise the further qa strategy .the results will refine the _ project plan _ and might lead to an update of the _qa manual_. when the project is finished , the project leader collects the _ measurement data _ relevant for our model and forwards it to the _ quality manager _ who stores it in the metrics infrastructure . in the following we show in greater detail the extensions to the v - modell xt we made to embed our analytical qa modelthe v - modell xt was created as a generic process model which already puts significant effort on qa and related management processes .it thus already provides the basic framework for analytical qa . to fit our model, we have to extend existing elements of the v - modell and add a few new ones . mainly this is only more detail on the explicit measurement and collection of the data for the model input factors . for our extension , we concentrate on products , activities and roles .the integration of our analytical quality model affects the responsibilities of 4 roles that already exist in the v - modell xt .their role profile descriptions are sufficiently abstract and fit our purposes .thus , only slight extensions need to be performed : * the _ quality manager _ is responsible for quality assurance standards across all projects and for an efficient and effective quality management system . in particular , he develops a systematic quality management and creates and maintains the _ quality management manual_. most importantly in our context , he defines rules and approaches how projects plan and perform quality assurance techniques .furthermore , he defines which qa techniques should be used in general and helps in choosing appropriate techniques for a specific project .we add that he is responsible for setting up and maintaining the _ metrics infrastructure_. * the _ project leader _ , as the leader of the project s execution , plans and controls the project s progress . in particular, he makes the basic estimates for project planning and decides on future changes based on status reports .the main extension for this role is that he uses our model for optimising the resource distribution for quality assurance and also collects the necessary measurement data for our model . *the _ qa manager _ controls the quality in a project and thereby supervises all quality assurance .he is responsible for the _ quality status reports _ and also plans the qa work in collaboration with others .there is only the small addition that in his _ quality status report _ the necessary measurements for the model must be contained .* the _ evaluator _ also called _ inspector _ although he not only uses inspections creates evaluation specifications and using those evaluates the artefacts created in the project .hence , he uses defect - detection techniques , e.g. , reviews and tests , on those artefacts and reports the results . 
also for the _ evaluator_ it is necessary that he documents the necessary measurements for the model factors .work products are the main v - modell elements and also the core project results .work products have one responsible role .the following list shows the work products that need to be considered , extended or added to apply our model : * the _ quality management manual _ is a work product that we add to capture among other subjects organisation - wide definitions of the metrics that need to be collected for the usage of the model .metrics definitions _ is one subject in this product .responsible role for this product is the _quality manager_. the v - modell xt mentions such a document but does not officially introduce it . * we also add the _ metrics catalog _ , which exists in the v - modell xt only as subject of an organisation - wide process model , for process adaptation and improvement ( org ) projects .we reuse the subject description and establish it as a full product under the responsibility of the _quality manager_. we additionally explicitly require the incorporation of the factors from sec . [ sec : needed ] .thereby , we can reference this product in regular development projects . * the _ metrics infrastructure _is the third new work product under the responsibility of the _ quality manager_. it is in essence similar to the existing _ project management infrastructure _ but not project specific . in our context it needs to store the measured data for the relevant metrics of our model and provide access to it across projects over an extended period of time .* we extend the existing product _ estimation _ with a new subject _ estimation of the defect content _ that contains data , which we use later in our model .responsible is the _ project leader_. * we extend the existing products _ evaluation report _ and _ quality status report _ with new subjects that will contain the necessary measurement data for the factors of sec .[ sec : needed ] responsible are _ evaluator _ resp . _qa manager _ roles .* we use the existing work product _ measurement data _ to capture all data that is collected in the course of the project for calculating the relevant metrics of our model .responsible for this product is the _ project leader_. * the _ metrics analysis _ is another existing work product under the responsibility of the _ project leader_. it contains detailed analyses of the relevant metrics of our model based on the previously measured data .each work product has exactly one associated activity . during execution of such an activity one instance or iteration of the work productis created or edited .after the completion of the activity , a product evaluation is performed according the qa guidelines .the following list explains the relevant activities and sub - activities to apply our model .most are related to the given work products above : * we introduce a new activity _ preparing and maintaining quality management manual _ doing what its name says . a sub - activity describes how the necessary metrics for applying our model are selected and defined . *we introduce the existing sub - activity _ preparing and maintaining metrics catalogue _ of the v - modell xt as full activity creating and maintaining the product_ metrics catalog_. * the new activity _ setting up and maintaining the metrics infrastructure _ will make sure that a data repository is available for the measurement data . 
*the main element of our analytical quality assurance model is introduced into the v - model xt as new sub - activity _ optimise qa_. it belongs to the activity _ preparing the qa manual_. this sub - activity is performed by the _ project leader _ with help from the _ qa manager _ based on his estimates and data from similar projects .he calibrates the model and optimises it ( w.r.t .cost or roi ) so that an optimal resource distribution is found .this is then documented in the _qa manual_. * the activity _ collecting measurement data _ describes how the resp .product is created and edited . a new sub - activity activity _archiving measurement data _ requires that the measurement data will be stored during the project and at its end so that they are available across projects and for new projects .* the existing activity _ calculating and analysing metrics _ is extended with a new sub - activity to extract the data that is to be stored for future projects in the _ metrics infrastructure_. * we add to the activity _ coming to a project progress decision _ a new sub - activity which uses our economics model as basis for the decision .different scenarios can be analysed and an optimal effort ( or resource ) distribution can be calculated .we package all above described new work products and activities as well as product / activity extensions in form of subjects and sub - activities as part of our new process module .we add general descriptions and an overview of the process module contents .thereby we have performed a fully modular extension of the v - model xt that a _ project leader _ can choose or not choose to apply during initial v - modell tailoring for a new project .in summary , we find that our model blends well with the v - modell xt . the necessary changes and additions to use the model fit with the existing structure and require only additions and slight extensions of existing v - modell xt elements .they all can be packages nicely as a process module .an embedding into other process models with a similar structuring should be possible with a comparable effort .one of the challenges of integrating our analytical quality assurance model into the v - modell xt is the scope of the described process .our model covers activities on both the organisational level spanning multiple projects and the project level .the main focus of the v - modell xt is to describe a process for conducting a particular project : once a project is initiated and the decision to use the v - modell xt is made , the tailoring activity will result in a project specific process .the v - modell xt does not specifically cover organisation - wide processes , such as quality management and continuous process improvement .however , this is not a limitation , because it can easily be extended we have done this for our integration .this results in a responsibility of the organisation to apply the v - modell xt process across projects and to provide the necessary infrastructure .a consequence of our modular extension of the v - modell is the sometimes artificial separation between project specific and organisation specific work product ( and activity ) definitions , such as the _ project infrastructure _ and the _ metrics infrastructure_. 
additionally , the existing v - modell xt process support for the _ introduction and maintenance of organisation - specific process models _ the process module org covers part of the organisation - wide activities without being combinable and integrated with project specific processes .thus , we chose to introduce some redundancy for the sake of modularity and clear understandability. one of the prerequisites of our approach is the necessity to have comparable projects which yield expressive metrics data .the higher the degree of similarity between the projects and the better the measured metrics data , the more precise will our model be able to optimise the qa processes .general efficiency models of defect - detection techniques such as the inspection model of kusumoto et al . or the testing model of morasca and serra - capizzano are aiming at analysing specific techniques and their application .however , they are typically not usable for planning purposes in a software project .cost models based on reliability models , e.g. , pham , aim to decide when to stop testing .however , they are only applicable to the system testing phase . more economic - oriented models such as idave boehm et al . or the model of slaughter , harter , and krishnan are typically more abstract or coarse - grained than the used model. moreover , the question when and how to use those models is not completely clear .especially , we are not aware of an integration into an existing process model . punter et al . aim also at a practical application of product evaluation with specific goals . however , they concentrate more on the actual evaluation process using the iso 14598 standard which we assume given by the v - modell xt. they also do not explicitly discuss the aim of using the evaluation results for future optimisations .cai et al . propose a method of optimal and adaptive testing with cost constraints .they discuss that it is effective to adapt testing and to explore the interplay between software and control .however , their model does only consider testing and is not explicitly integrated in a complete process model .ambler uses process patterns in to describe task - specific self - contained pieces of processes and workflows in a reusable way .such patterns can be applied to solve complex tasks when needed .strrle shows how process patterns can be described in great detail using uml .the idea of process patterns is further refined by gnatz et al . in form of a modular and extensible software development process based on collections of independent process components .these process patterns essentially are the basis of the extension mechanism of the v - modell xt .analytical models of quality assurance would be a valuable tool for project managers and other decision - makers in software projects .there is a variety of such models available on different levels of abstraction .however , the adoption in practice is still weak .one main problem is that the usage of those model is often not clear .especially , when and how the model should be used in an existing process model is typically not specified by the model proposers . 
in this paper , we show the exemplary integration of a detailed model of analytical quality assurance in the process model v - modell xt .we are not aware of other models of quality assurance that have explicitly been integrated into an existing process model .the benefits of this work are two - fold : ( 1 ) organisations that follow already the v - modell xt have now simple means to also incorporate the analytical model into their process .( 2 ) it has been shown that such an integration can be done relatively simple and with little effort .therefore , this should be also possible with other process models and hence the usage of models of qa can be increased . for future work , we consider tool support as the other important aspect of pushing the use of such models in software organisations .hence , we plan to build an easy to use tool implementation that helps in applying the model .it is also to investigate whether our claim w.r.t .the easy integration into other process models really holds .we are grateful to ulrike hammerschall for the discussion on a first integration .
economic models of quality assurance can be an important tool for decision - makers in software development projects . they enable to base quality assurance planning on economical factors of the product and the used defect - detection techniques . a variety of such models has been proposed but many are too abstract to be used in practice . furthermore , even the more concrete models lack an integration with existing software development process models to increase their applicability . this paper describes an integration of a thorough stochastic model of the economics of analytical quality assurance with the systems development process model v - modell xt . the integration is done in a modular way by providing a new process module a concept directly available in the v - modell xt for extension purposes related to analytical quality assurance . in particular , we describe the work products , roles , and activities defined in our new process module and their effects on existing v - modell xt elements .
more and more , astronomical research is being performed remotely , in the sense that the observer , or perhaps more properly `` data analyst '' , is now often not present at the place or time at which observations are taken .the increase in remoteness has several causes .one is that for many observatories , telecommunication is easier than travel , especially if telescope allocations are of short durations .another is that several new telescopes are using or plan to use queue - scheduling ( eg , contributions to ; and , , , ) , for which observer travel is essentially impossible .some new ground - based telescopes are partially or completely robotic ( eg , contributions to , contributions to ; and , , , , ) .possibly the most important reason for the increase in remote observing is that many observatories , many large surveys , and some independent organizations are creating huge public data archives which allow analyses by anyone at any time ( eg , contributions to and ) .there has been community discussion of a `` national virtual observatory '' which might be a superset of these archives and surveys ( eg , contributions to ) .remote observing and archival data analysis bring huge economic and scientific benefits to astronomy , but with the significant cost that the observer does not have direct access to observing conditions at the site .most remote observatories and data archives keep logs written by telescope operators , but these logs are notoriously non - uniform in their attention to detail and use of terminology .all sites that plan to host remote observers or maintain public data archives must have repeatable , quantitative , astronomically relevant site monitoring .for these reasons , among others ( involving photometric calibration and direction of survey operations ) the sloan digital sky survey ( sdss ; ) , which is constructing a public database of of five - bandpass optical imaging and optical spectra , employs several pieces of hardware for monitoring of the apache point observatory site , including a - cloud - camera scanning the whole sky ( ) , a single - star atmospheric seeing monitor , a low - altitude dust particle counter , a basic weather station , and a 0.5-m telescope making photometric measurements of a large set of standard stars .this paper is about a fully automated software `` robot '' that reduces the raw 0.5-m telescope data , locates and measures standard stars , and determines atmospheric extinction in near - real time , reporting its findings back to telescope and survey operators via the world - wide web ( www ) .this paper describes the software robot , rather than the hardware , which will be the subject of a separate paper ( uomoto et al in preparation ) .although the robot is somewhat specialized to work with the hardware and data available at the apache point observatory , it could be generalized easily for different hardware .we are presenting it here because it might serve as a prototype for site monitors that ought to be part of any functional remote observing site and of the national virtual observatory .the primary telescope used in this study is the photometric telescope ( pt ) of the sdss , located at the apache point observatory ( apo ) in new mexico , at latitude 324649.30n , longitude 1054913.50w , and elevation 2788 m. 
the pt has a 20-in primary mirror and is outfitted with a pixel ccd with 1.16 arcsec pixels , making for a field of view .the telescope and ccd will be described in more detail elsewhere ( uomoto et al in preparation ) .the telescope takes images through five bandpasses , , , , , and , chosen to be close to those in the sdss 2.5-m imaging camera .filter wavelengths are given in table [ tab : bandpasses ] .the magnitude system here is based on an approximation to an ab system ( , ) , again because that was the choice for the sdss imaging .the photometric system will be described in more detail elsewhere ( smith et al in preparation ) .nothing about the function of the robot is tied to this photometric system ; any system can be used provided that there is a well - calibrated network of standard stars .site monitoring and measurement of atmospheric extinction is only one part of the function of the photometric telescope ( pt ) .the pt is being simultaneously used to take data on a very large number of `` secondary patches '' that will provide calibration information for the sdss imaging . for this reason, the pt spends only about one third of its time monitoring standard stars .its site monitoring and atmospheric extinction measuring functions could be improved significantly , in signal - to - noise and time resolution , if the pt were dedicated to these tasks .the observing plan for the night is generated automatically by a field `` autopicker '' which chooses standard star fields on the basis of ( a ) observability , ( b ) airmass coverage , ( c ) intrinsic stellar color coverage , and ( d ) number of calibrated stars per standard field .because the observing is very regular , with , , , , and images taken ( in that order ) of each field , it has been almost entirely automated .the only significant variation in observing from standard field to standard field is that different fields are imaged for different exposure times to avoid saturation .the raw data from the telescope is in the form of images in the flexible image transport system format ( fits ; ) .the raw image headers contain the filter ( bandpass ) used , the exposure time , the date and ut at which the exposure was taken , the approximate pointing of the telescope in ra and dec , and the type of exposure ( eg , bias , flat , standard - star field , or secondary patch ) .the robot makes use of much of this information , as described below .in principle , the photometricity monitoring software could run on the data - acquisition computer .however , in the interest of limiting stress on the real - time systems , the photometricity robot runs on a separate machine , obtaining its data by periodic executions of the unix ` rsync ` software .the ` rsync ` software performs file transfer across a network , using a remote - update protocol ( based on file checksums ) to ensure that it only transfers updated or new files , without duplicating effort .the output of ` rsync ` can be set to include a list of the files which were updated .this output is passed ( via scripts written in the unix shell language ` bash ` ) to a set of software tools written in the data - analysis language ` idl ` .virtually all of the image processing , data analysis , fitting , and feedback to the observer is executed with ` idl ` programs in ` bash ` wrappers .this combination of software has proven to be very stable and robust over many months of continuous operation .in addition , data reduction code written in ` idl ` is easy to develop and read .our only 
significant reservation is that ` idl ` is a commercial product which is not open - source .we have responded to this lack of transparency by writing as much as possible of the robot s function in terms of very low - level ` idl ` primitives , which could be re - written straightforwardly in any other data - analysis language if we lost confidence in or lost access to ` idl ` .we have not found the lack of transparency to be limiting for any of the functionality described in this paper .each raw image is bias - subtracted and flattened , using biases and flat - field information archived from the most recent photometric night . how the bias and flat images are computed from a night s datais described in section [ sec : nextday ] , along with the conditions on which a night is declared photometric . because the ccd is thinned for sensitivity in the bandpass , it exhibits interference fringing in the and bandpasses .the fringing pattern is very stable during a night and from night to night .the fringing is modeled as an additive distortion ; it is removed by subtracting a `` fringe image '' scaled by the dc sky level .how the fringe image is computed is described in section [ sec : nextday ] .the dc sky level is estimated by taking a mean of all pixels in the image , but with iterated `` sigma - clipping '' in which the rms pixel value is computed , 3.5-sigma outlier pixels are discarded , and the mean and rms are re - estimated .this process is iterated to convergence .the fringing correction is demonstrated in figure [ fig : fringe_demo ] .although in principle the fringing pattern may depend on the temperature or humidity of the night sky , it does not appear to vary significantly within a night , or even night - to - night .perhaps surprisingly , the fringing pattern appears fixed and its amplitude scales linearly with the dc level of the sky . for each corrected pt image , a mask of bad pixels is retained .the mask is the union of all pixels saturated in the raw images along with all pixels which are marked as anomalous in the bias image , flat image , or ( for the and bandpasses ) fringe image .how pixels are marked as anomalous in the bias , flat , and fringe is described in section [ sec : nextday ] .the individual bias - subtracted , flattened and ( for the and bandpasses ) fringe - corrected image will , from here on , be referred to as the `` corrected frames '' .the corrected frames are stored and saved by the robot in fits format .hardware pointing precision is not adequate for individual source identifications .the software robot determines the precise astrometry itself , by comparison with the usno - sa2.0 astrometric catalog ( ) .this catalog contains astrometric stars over most of the sky ; there are typically a few catalog stars inside a pt image . in brief, the astrometric solution for each image is found as follows : a sub - catalog is created from all astrometric stars within 30 arcmin of the field center ( as reported by the hardware in the raw image header ) .an implementation of the daophot software ( , ) in ` idl ` ( by private communication from w. 
landsman ) is used to locate all bright point sources in the corrected frame ( daophot is used for object - finding only , not photometry ) .each set of stars is treated as a set of delta - functions on the two - dimensional plane of the sky , and the cross - correlation image is constructed .if there is a statistically significant cluster of points in the cross - correlation image , it is treated as the offset between the two images .corresponding positions in the two sets of stars ( from the corrected frame and from the astrometric catalog ) are identified , and the offset , rotation , and non - linear radial distortion in the corrected frame are all fit , with iterative improvement of the correspondence between the two sets of stars . the precise astrometric information is stored in the fits header of each corrected frame in gsss format ( this format is not standard fits but is used by the hst guide star survey ; cf . , ) .our algorithm is much faster , albeit less general , than previous algorithms ( eg , ) . essentiallyall exposures of more than a few seconds in the , and bandpasses obtain correct astrometric solutions by this procedure .some short and exposures do not , and are picked up on a second pass using a mini - catalog constructed from stars detected in the and exposures of the same field . on most nights , all exposures in all bands obtain correct astrometric solutions .the algorithm and its implementation will be discussed in more detail in a separate paper , as it has applications in astronomy that go beyond this project .the robot measures and performs photometric fits with the stars in the sdss catalog of photometric standards ( smith et al in preparation ) .this catalog includes stars in the range mag , calibrated with the usno 40-inch ritchey - chrtien telescope .several of the standard - star fields used in this catalog are well - studied fields ( ) containing multiple standard stars spanning a range of magnitude and color . the photometric catalog is searched for photometric standard stars inside the boundaries of each corrected frame with precise astrometric header information . if any stars are found in the photometric catalog , aperture photometry is performed on the point source found at the astrometric location of each photometric standard star .the centers of the photometric apertures are tweaked with a centroiding procedure which allows for small ( arcsec ) inaccuracies in absolute astrometry .the aperture photometry is performed in focal - plane apertures of radii 4.63 , 7.43 , and 11.42 arcsec . the sky value is measured by taking the mean of the pixel values in an annulus of inner radius 28.20 and outer radius 44.21 arcsec , with iterated outlier rejection at 3 sigma .all of these angular radii are chosen to match those used by the sdss photo software ( , lupton et al in preparation ) . 
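a minimal numpy sketch of this aperture - photometry step on a synthetic frame follows ; the pixel scale is the pt value quoted above , while the centroid tweak , the field - of - view corrections , and the real standard - star catalogue lookup are omitted .

```python
import numpy as np

PIXSCALE = 1.16                       # arcsec per pixel (pt ccd)

def sigma_clipped_mean(values, nsigma=3.0, maxiter=10):
    """iterated mean with outlier rejection, as used for the sky estimate."""
    values = np.asarray(values, dtype=float)
    for _ in range(maxiter):
        mean, rms = values.mean(), values.std()
        keep = np.abs(values - mean) < nsigma * rms
        if keep.all():
            break
        values = values[keep]
    return values.mean()

def aperture_photometry(image, x0, y0, r_ap=7.43, r_in=28.20, r_out=44.21):
    """sky-subtracted counts in a circular aperture centred on (x0, y0);
    all radii are given in arcsec and converted to pixels internally."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0) * PIXSCALE
    sky = sigma_clipped_mean(image[(r > r_in) & (r < r_out)])
    aperture = r < r_ap
    return (image[aperture] - sky).sum()

if __name__ == "__main__":
    # synthetic 101x101 frame: flat sky plus one gaussian star of known flux
    rng = np.random.default_rng(1)
    y, x = np.indices((101, 101))
    image = 200.0 + rng.normal(0.0, 5.0, (101, 101))
    image += 5.0e4 / (2 * np.pi * 2.0**2) * np.exp(-((x - 50)**2 + (y - 50)**2) / (2 * 2.0**2))
    print("counts in 7.43-arcsec aperture: %.0f" % aperture_photometry(image, 50, 50))
```

the same kind of iterated clipped mean , with a 3.5-sigma cut , is what the robot uses to estimate the dc sky level for the fringe correction described earlier .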
in what follows ,the 7.43-arcsec - radius aperture photometry is used .this aperture was chosen from among the three for showing , on nights of typical seeing , the lowest - scatter photometric solutions .this is because , in practice , the 7.43-arcsec - radius aperture is roughly the correct trade - off between individual measurement signal - to - noise ( which favors small apertures ) and insensitivity to spatial or temporal variations in the point - spread function ( which favors large ) .the pt shows significant , repeatable , systematic distortions of the point - spread function across the field of view ; a more sophisticated robot would model and correct these distortions ; for our purposes it is sufficient to simply choose the relatively large 7.43-arcsec - radius aperture .each photometric measurement is corrected for its location in the field of view of the pt .there are two corrections .the first is an illumination correction derived from the radial distortion of the field as found in the precise astrometric solution ( section [ sec : astrom ] ) .the illumination correction is designed to account for the fact that photometrically we are interested in fluxes , but the flatfield is measured with the sky ; ie , the flatfield is made to correct pixels to a constant surface - brightness sensitivity rather than a constant flux sensitivity .because of optical distortions , pixels near the edge of the ccd see a different solid angle than pixels near the center .empirically , the dominant field distortion appears radial , so no attempt has been made to correct for illumination variation from arbitrary distortions .the illumination correction reaches a maximum of mag at the field edges and mag at the field corners .the second correction is related to the fringing in the and bandpasses .the ccd is thinned , but not precisely uniformly ; the dominant thinning gradients are radial . 
because of reflections internal to the ccd , gradients in ccd thickness lead to gradients in the fraction of point - source light scattered out of the seeing core and into the sky level .since the flat - field is computed on the basis of sky level , these gradients are seen as residuals in photometry .radial photometry corrections for the and bandpasses were found by performing empirical fits to photometry residuals ; they are applied to the and bandpass photometry .these `` thinning '' corrections reach a maximum of \sim [ 0.02 , 0.05 ] mag at the field corners .the `` prototype cloud camera '' ( ) operating at apo utilizes a single - pixel cooled detector and scanning mirrors .the sky is scanned by two flat mirrors driven by stepper motors , followed by an off - axis hyperbolic mirror that images the sky onto a single channel hgcdte photoconductive detector .the detector samples 300 times per scan , 300 scans per image , yielding an image with pixels covering a field with the deg beam .some typical images are shown in [ fig : cloudcam ] .an image is completed in approximately 5 min .this is somewhat slow for real - time monitoring , but perfectly adequate for our purposes .this design was preferred over a solid state array for reasons of price , stability , and field of view .a disadvantage is the maintenance required by moving parts , but this has not been a serious drawback .experimentation showed that a simple and adequate method for detecting cloud cover is to compute the rms value of the sky within 45 deg of zenith in each frame .this seems to be a simple and robust method , and it fails only in the case of unnaturally uniform cloud cover ( this has never occurred ) . when the cloud - camera rms exceeds a predefined threshold for a period of time , that period is declared bad , and the data taken during that interval are ignored for photometric parameter fitting .the bad interval is padded by 20 min on each end , so that even a single cloud appearing in one frame requires discarding at least 40 min of data .this is conservative , but it is more robust to set the cloud threshold high and reject significant time intervals than to make the threshold extremely low .every time the pt completes a set of exposures in a field , the robot compiles all the measurements made of photometric standard stars in that night and fits photometric parameters to all data not declared bad by the cloud - camera veto ( section [ sec : cloudveto ] ) .the photometric equations used for the five bandpasses are \[ \begin{aligned } u'_\mathrm{inst} & = u_\mathrm{usno} + a_u + b_u\,(u - g)_\mathrm{usno} + k_u\,x + c_u\,[x - x_0]\,[(u - g)_\mathrm{usno} - (u - g)_0 ] + \dot{a}_u\,[t - t_0 ] \\ g'_\mathrm{inst} & = g_\mathrm{usno} + a_g + b_g\,(g - r)_\mathrm{usno} + k_g\,x + c_g\,[x - x_0]\,[(g - r)_\mathrm{usno} - (g - r)_0 ] + \dot{a}_g\,[t - t_0 ] \\ r'_\mathrm{inst} & = r_\mathrm{usno} + a_r + b_r\,(r - i)_\mathrm{usno} + k_r\,x + c_r\,[x - x_0]\,[(r - i)_\mathrm{usno} - (r - i)_0 ] + \dot{a}_r\,[t - t_0 ] \\ i'_\mathrm{inst} & = i_\mathrm{usno} + a_i + b_i\,(i - z)_\mathrm{usno} + k_i\,x + c_i\,[x - x_0]\,[(i - z)_\mathrm{usno} - (i - z)_0 ] + \dot{a}_i\,[t - t_0 ] \\ z'_\mathrm{inst} & = z_\mathrm{usno} + a_z + b_z\,(i - z)_\mathrm{usno} + k_z\,x + c_z\,[x - x_0]\,[(i - z)_\mathrm{usno} - (i - z)_0 ] + \dot{a}_z\,[t - t_0 ] \end{aligned} \] where the u'_\mathrm{inst} , \ldots , z'_\mathrm{inst} symbolize instrumental magnitudes defined by m'_\mathrm{inst} = -2.5\,\log_{10}(f / t_\mathrm{exp}) , with f the flux in raw
counts in the corrected frame and t_\mathrm{exp} the exposure time ; the u_\mathrm{usno} , \ldots , z_\mathrm{usno} symbolize the magnitudes in the photometric standard - star catalog ; x symbolizes airmass ; t symbolizes time ( ut ) ; x_0 , t_0 , and the colors ( u - g)_0 , etc , symbolize fiducial airmass , time , and colors ( arbitrarily chosen but close to mean values ) ; and the a_j , b_j , k_j , c_j , and \dot{a}_j parameters are , in principle , free to vary .the system sensitivities are the a_j ; the tiny differences in photometric systems between the usno 40-inch and pt bandpasses are captured by the color coefficients b_j ; the atmospheric extinction coefficients are the k_j ; atmospheric extinction is a weak function of intrinsic stellar color parameterized by the c_j ; and the \dot{a}_j parameterize any small time evolution of the system during the night .the above photometric equations are not strictly correct for an ab system , because in the ab system , there is no guarantee that the colors of standard stars through two slightly different filter systems will agree at zero color ; this agreement is assumed by the above equations . in an empirical system , such as the vega - relative magnitude system , a certain star , such as vega ( and stars like it ) , has zero color in all colors of all filter systems .the ab system is based on a hypothetical source with constant f_\nu ; there is no guarantee that a source with zero color in one filter system will have zero color in any other .the offsets must be computed theoretically , using models of ccd efficiencies , mirror reflectivities , atmospheric absorption spectra , and intrinsic stellar spectral energy distributions .we have ignored this ( subtle ) point , since it only leads to offsets in the sensitivity parameters and does not affect photometricity or atmospheric extinction assessments . in practice , the c_j parameters are always fixed at theoretically derived values ; the \dot{a}_j parameters are only allowed to be non - zero in the next - day analysis ( section [ sec : nextday ] ) . because the design specification on the pt did not require correct airmass values in raw image headers , the airmass values are computed on the fly from the ra , dec , ut , and location of the observatory ( eg , ) .the reference airmass and colors are chosen to be roughly the mean values on a typical night of observing ( quoted in the ab system ) . in the next - day analysis , the reference time t_0 is set to be the mean ut of the observations . in principle the color coefficients b_j could be determined globally and fixed once and for all .however , the pt filters have been shown to have some variation with time and with humidity ( they are kept in a low - humidity environment with some variability ) , so the robot fits them independently every night .freedom of the b_j is allowed not because it is demanded by the data , but rather because measurements of the b_j over time are an important part of data and telescope quality monitoring .typical color coefficient values ( which measure differences between the usno 40-inch telescope and pt bandpasses ) are small .quite a bit of experimentation went into the photometric equations .inclusion of theoretically determined c_j parameters minutely improves the fit , but on a typical night there are not enough data to determine the c_j parameters empirically ; the c_j are held fixed .terms proportional to products of time and airmass , again , are not well constrained from a single night s data ; they are not included in the fit .
to each bandpassa color is assigned ; the choices have been made to use the most well - behaved `` adjacent '' colors .( the color is not used for the equation because the boundary between and is , by design , on the 4000 break , making color transformations extremely sensitive to the precise properties of the filter curves . ) in the real - time analyses , the , , and parameters ( 15 total ) are fit to the counts from the aperture photometry ( and usno magnitudes , times , airmasses , etc ) with a linear least - square fitting routine , which iteratively removes 3.0-sigma outliers and repeats its fit .a typical night at apo shows extinction coefficients with factor of variations from night to night .all photometric measurements are weighted equally in these fits , because the error budget appears to be dominated by systematic errors in the usno catalog and sky subtraction ; the primary errors are not from photon statistics . as observing continues and iterative improvement to the usno catalogis made , we will be able to use a more sophisticated weighting model ; we do nt expect such improvements to significantly change the values of our best - fit photometric parameters . on a typical night , 5-band measurements are made of 20 to 50 standard stars in 10 to 15 standard - star fields .every ten minutes during the night ( from 18:00 to 07:00 , local time ) , the robot builds a www page in hypertext markup language ( html ) reporting on its most up - to - date photometric solution .the www page shows the best - fit , and parameter values , plots of residuals around the best photometric solution in the five bandpasses , and root - mean - squares ( rms ) of the residuals in the five bandpasses .parameters or rms values grossly out - of - spec are flagged in red .this feedback allows the observers to make real - time decisions about photometricity , and confirm expectations based on visual impressions of and 10- camera data on atmospheric conditions .the www page also includes an excerpt from the observers manually entered log file .this allows those monitoring the site remotely to compare the observers and robot s photometricity judgements .figures [ fig : feedback1 ] and [ fig : feedback2 ] show examples of the photometricity output on the www page on two typical nights .the www page also shows data from the apo weather , dust , and cloud monitors for comparison by the observers , as do figures [ fig : feedback1 ] and [ fig : feedback2 ] . 
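a minimal sketch of the per - band linear least - squares fit with iterated 3-sigma rejection follows ; the synthetic measurements , the true parameter values and the noise level are invented for illustration , and the colour - airmass cross term , the time - evolution terms , and the simultaneous five - band fit are omitted .

```python
import numpy as np

def fit_band(delta_mag, color, airmass, nsigma=3.0, maxiter=10):
    """fit delta_mag = a + b*color + k*airmass by least squares,
    iteratively dropping 3-sigma outliers and refitting."""
    keep = np.ones(delta_mag.size, dtype=bool)
    for _ in range(maxiter):
        A = np.column_stack([np.ones(keep.sum()), color[keep], airmass[keep]])
        coeffs, *_ = np.linalg.lstsq(A, delta_mag[keep], rcond=None)
        resid = delta_mag - np.column_stack(
            [np.ones(delta_mag.size), color, airmass]) @ coeffs
        rms = resid[keep].std()
        new_keep = np.abs(resid) < nsigma * rms
        if (new_keep == keep).all():
            break
        keep = new_keep
    return coeffs, rms, keep

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 40                                   # ~40 standard-star measurements per night
    color = rng.uniform(0.0, 1.5, n)         # catalogue colours
    airmass = rng.uniform(1.1, 2.0, n)
    a_true, b_true, k_true = -21.0, 0.02, 0.10   # invented zero point, colour term, extinction
    delta = a_true + b_true * color + k_true * airmass + rng.normal(0, 0.01, n)
    delta[3] += 0.3                          # one cloud-affected outlier
    (a, b, k), rms, keep = fit_band(delta, color, airmass)
    print("a=%.3f b=%.3f k=%.3f  rms=%.3f mag  (%d of %d kept)"
          % (a, b, k, rms, keep.sum(), keep.size))
```

in the real robot the analogous solution is refreshed every ten minutes on the feedback page , with all five bands fit together .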
since www pages in html are simply text files , they are trivially output by ` idl ` .the figures on the www pages are output by ` idl ` as postscript files and then converted to a ( crudely ) antialiased pixel image with the unix command ` ghostscript ` and the unix ` pbmplus ` package .at 07:00 ( local time ) each day , the robot begins its `` next - day '' analysis , in which all the data from the entire night ( just ended ) are re - reduced and final decisions are made about photometricity and data acceptability .bias frames taken during the night are identified by header keywords .the bias frames are averaged , with iterated outlier rejection , to make a bias image .any pixels with values more than 10-sigma deviant from the mean pixel value in the bias image are flagged as bad in a bias mask image .dome and sky flat frames are identified by header keywords .these raw images are bias - subtracted using the bias image .each bias - subtracted image is divided by its own mean , which is estimated with iterated outlier rejection .the bias - subtracted and mean - divided flat frames are averaged together , again with iterated outlier rejection , to make five flat images , one for each bandpass .any pixels with values more than 10-sigma deviant from the mean are flagged as bad in flat mask images .the and -bandpass images are affected by an interference fringing pattern , which is modeled as an additive distortion . any or -bandpass image taken during the night with exposure time s is identified .these long - exposure frames are bias - subtracted and divided by the flat .each is divided by its own mean , again estimated with iterated outlier rejection .these mean - divided frames are averaged together , again with iterated outlier rejection to make two fringe correction images , in the and bandpasses . again , 10-sigma outlier pixels are flagged as bad in fringe mask images .constant bias , flat and fringe images are used for the entire night ; there is no evidence with this system that there is time evolution in any of these .all the raw images from the night are bias - subtracted and flattened with these new same - night bias and flat images .the and -bandpass images are also fringe - corrected .thus a new set of corrected frames is constructed for that night .the real - time astrometry solutions are re - used in these new corrected frames .all photometry is re - measured in the new corrected frames .a final photometric parameter fit is performed with the new measurements , again with removal of data declared bad by the cloud - camera veto , and with iterated outlier rejection .the only difference is that the time evolution terms \dot{a}_j\,[t - t_0 ] are allowed to be non - zero .because we expect non - zero \dot{a}_j terms to be caused by changing atmospheric conditions , we experimented with allowing time evolution in the extinction coefficients ( eg , \dot{k}_j\,[t - t_0]\,x terms ) . unfortunately , such terms involve non - linear combinations of input data ( time times airmass ) and can only be added without introducing biases if there are * also * \dot{a}_j\,[t - t_0 ] terms ; ie , it is wrong in principle to include \dot{k}_j terms without also adding \dot{a}_j terms , especially in the face of iterated outlier rejection .we found that adding both \dot{k}_j and \dot{a}_j terms did not improve our fits relative to simply adding \dot{a}_j terms , so we have not included \dot{k}_j terms .
with more frequent standard - star sampling , the time resolution of the systemwould be improved and terms would , presumably , improve the fits .the entire next - day re - analysis takes between three and six hours , depending on the amount of data .much of the computer time is spent swapping processes in and out of virtual memory ; with more efficient code or larger ram the re - analysis time could be reduced to under two hours .the re - analysis would take one to two hours longer if it were necessary to repeat the astrometric solutions found in the real - time analysis . at the end of the re - analysis ,the robot constructs a `` final '' www page , similar to the real - time feedback www page , but with the final photometric solution and residuals .parameters grossly out - of - spec are shown in red .also , the robot sends an email to a sdss email archive and exploder , summarizing the final parameter values and rms residuals in the five bands . for the robot s purposes , a night is declared `` photometric '' if the rms residual around the photometric solution ( after iterated outlier rejection at the 3-sigma level ) in the ] mag .if the night is declared photometric , then the bias , flat , and fringe images and their associated pixel masks are declared `` current , '' to be used for the real - time analysis of the following nights .figure [ fig : cloudveto ] shows the final fits for an example night , with and without the inclusion of the veto of data taken during cloudy periods .this is a night which would have been declared marginally non - photometric without the cloud - camera veto , but became photometric when the cloudy periods were removed .all raw pt data are saved in a tape archive at fermi national accelerator laboratory ( fnal ) . in addition, about one year s worth of the most recent data are archived on the robot machine itself , on a pair of large disk drives .the photometric parameters from each photometric night are kept in a fits binary table ( ) , labeled by observation date .these parameter files are mirrored in directories at fnal and elsewhere ( dwh s laptop , for example ) . because the astrometric solutions are somewhat time - consuming ,the astrometric headers for all the corrected frames are also saved on disk on the robot machine and mirrored , along with each night s bias , flat , and fringe images . because the astrometric header information is all saved , along with the bias , flat , and fringe images ,the corrected frames can be reconstructed trivially and quickly from the raw data .the archived output data from the photometricity monitor robot is not just useful for verifying and analyzing contemporaneous data from the observatory .it contains a wealth of scientific data useful for analyzing long - term behavior and pathologies of the site , the hardware , and the standard star catalog .many of these analyses will be performed and presented elsewhere , as the site monitoring data builds up . 
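the photometricity decision itself can be sketched as below ; the per - band rms specification is not legible in this copy , so the numbers in `RMS_SPEC` are placeholder assumptions , while the logic ( cloud - camera veto first , then 3-sigma - clipped rms compared against the spec ) follows the text :

```python
# sketch of the "photometric night" decision; RMS_SPEC values are placeholders.
import numpy as np

RMS_SPEC = {'u': 0.05, 'g': 0.02, 'r': 0.02, 'i': 0.02, 'z': 0.03}  # mag, assumed

def clipped_rms(residuals, nsigma=3.0, n_iter=5):
    r = np.asarray(residuals, dtype=float)
    for _ in range(n_iter):
        r = r[np.abs(r - r.mean()) < nsigma * r.std()]
    return r.std()

def night_is_photometric(residuals_by_band, cloudy_flag_by_band):
    for band, resid in residuals_by_band.items():
        clean = np.asarray(resid)[~np.asarray(cloudy_flag_by_band[band])]  # cloud veto
        if clipped_rms(clean) > RMS_SPEC[band]:
            return False
    return True
```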
as an example , figure [ fig : ext_covar ] shows the atmospheric extinction coefficients $ ] plotted against one another , for most of the photometric nights in roughly one - third of a year .this figure shows the variability in the extinction coefficients , as well as their covariance .if the variability in the extinction is due to varying optical depth of a single component of absorbing material , the covariances should fall along the grey diagonal lines ( ie , ) .although the vs plot is consistent with this assumption , the vs plot appears not to be .perhaps not surprisingly , atmospheric extinction must be caused by multiple atmospheric components .it is at least slightly surprising to us that the variability in -bandpass extinction is smaller than would be predicted from a one - component assumption and the variability in the -bandpass extinction .the success of this ongoing project shows that robust , hands - off , real - time and next - day photometricity assessment and atmospheric extinction measurement is possible .there is much lore about photometricity , site variability , and precise measurement in astronomy .most of this lore can be given an empirical basis with a simple system like the one described in this paper .it is worth emphasizing that the observing hardware used by the robot is _ not _ dedicated to the robot s site monitoring tasks .the pt is being used to calibrate sdss data ; it only spends about one third of its time taking the observations of catalogued standards which are used for photometricity and extinction measurements .this shows that a robot of this type could , with straightforward ( if not trivial ) adjustment , be made to `` piggyback '' on almost any observational program , provided that some fraction of the data is multi - band imaging of photometric standard stars .the robot does not rely on the images having accurate astrometric header information , or accurate text descriptions or log entries ; it finds standard stars in a robust , hands - off manner .many observatories could install a robot of this type with _ no hardware cost whatsoever ! _ a site with no appropriate imaging program on which the robot could `` piggyback '' could install a small telescope with a good robotic control system , a ccd and filter - wheel , and the robot software described here , at fairly low cost . the telescope need only be large enough to obtain good photometric measurements of some appropriate set of standard stars .all of the costs associated with such a system are going down as small , robotic telescopes are becoming more common ( eg , , , ) .the robot works adequately without input from the cloud camera , as long as data are not taken during cloudy periods , or as long as the observers can mark , in some way accessible to the robot , cloudy data . 
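the single - component test discussed above for figure [ fig : ext_covar ] can be reproduced from the archived nightly extinction coefficients with a short script ; the sketch below simply fits a line through the origin and reports the scatter about it :

```python
# sketch of the single-component consistency test: if one absorber drives the
# night-to-night variability, the pairs (k_j, k_i) should scatter about a line
# through the origin, k_i = c * k_j.
import numpy as np

def single_component_test(k_i, k_j):
    k_i, k_j = np.asarray(k_i, float), np.asarray(k_j, float)
    c = np.sum(k_i * k_j) / np.sum(k_j * k_j)        # least-squares slope through origin
    resid = k_i - c * k_j
    return c, np.std(resid), np.corrcoef(k_i, k_j)[0, 1]

# example: nightly extinctions for two bands read back from the archived tables
# slope, scatter, corr = single_component_test(k_band_i_nights, k_band_j_nights)
```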
on the other hand , pixel array cameras , with no moving parts ( unlike the prototype camera working at apo ) are now extremely inexpensive andwould be easy to install and use at any observatory .the robot system was developed and implemented in a period of about nine months .it has been a very robust tool for the sdss observers .the rapid development and robust operation can be ascribed to a number of factors : the robot design philosophy has always been to make every aspect of the robot s operation as straightforward as possible .we have only added sophistication to the robot s behavior as it has been demanded by the data .the ` idl ` data analysis language has primitives useful for astronomy and it operates on a wide range of platforms and operating systems . perhaps above all , the pt is a stable , robust telescope ( uomoto et al in preparation ) . without objective , well - understood site monitoring like that provided by this simple robot , analyses of archived and queue - observing data will always be subject to some suspicion . at apo , the visual monitor robot has been a very inexpensive and effective tool for building confidence in observer decisions and for providing feedback to data analysis . with this robot operating , apo has better monitoring of site conditions and data quality than most existing or even planned observatories .comments , suggestions , data , computer code , bug reports and hardware maintenance were provided generously by bill boroski , jon brinkmann , scott burles , bing chen , daniel eisenstein , masataka fukugita , steve kent , jill knapp , wayne landsman , brian lee , craig loomis , robert lupton , pete newman , eric neilsen , kurt ruthsmandorfer , don schneider , steph snedden , chris stoughton , michael strauss , douglas tucker , alan uomoto , brian yanny , don york , our anonymous referee , and the entire staff of the apache point observatory .the sloan digital sky survey ( sdss ) is a joint project of the university of chicago , fermilab , the institute for advanced study , the japan participation group , the johns hopkins university , the max - planck - institute for astronomy ( mpia ) , the max - planck - institute for astrophysics ( mpa ) , new mexico state university , princeton university , the united states naval observatory , and the university of washington .apache point observatory , site of the sdss telescopes , is operated by the astrophysical research consortium ( arc ) .funding for the project has been provided by the alfred p. sloan foundation , the sdss member institutions , the national aeronautics and space administration , the national science foundation , the u.s .department of energy , the japanese monbukagakusho , and the max planck society .the sdss web site is http://www.sdss.org/. adelman , s. j. , dukes , r. j. , & adelman , c. j. editors 1992 , asp conf .ser . 28 : automated telescopes for photometry and imaging , c. et al 1999 , , 398 , 400 baruch , j. e. & da luz vieira , j. 1993 , , 1945 , 488 boroson , t. , davies , j. , and robson , i. , editors 1996 , asp conf .87 : new observing modes for the next century , t. a. , harmer , d. l. , saha , a. , smith , p. s. , willmarth , d. w. , and silva , d. r. 1998 , , 3349 , 41 brunner , r. j. , djorgovski , s. g. , and szalay , a. s. , editors 2001 , asp conf .ser . 225 : virtual observatories of the future , m. and greisen , e. w. 2000 , in manset , n. , veillet , c. , and crabtree , d. , editors , asp conf .ser . 
216 : astronomical data analysis software and systems ix , 571 castro - tirado , a. j. et al . 1999 , , 138 , 583 , w. d. , tody , d. , and pence , w. d. 1995 , , 113 , 159 filippenko , a. v. , editor 1992 , asp conf .ser . 34 : robotic telescopes in the 1990s fukugita , m. , ichikawa , t. , gunn , j. e. , doi , m. , shimasaku , k. , and schneider , d. p. 1996, , 111 , 1748 , f. and rozas , m. 1998 , , 3349 , 319 hull , c. l. , limmongkol , s. , and siegmund , w. a. 1994 , , 2199 , 852 landolt , a. u. 1992 , , 104 , 340 lupton , r. h. , gunn , j. e. , ivezic , z. , knapp , g. r. , kent , s. , and yasuda , n. 2001 , in ? ? ,editors , asp conf .ser . ? ? : astronomical data analysis software and systems x , in press manset , n. , veillet , c. , and crabtree , d. , editors 2000 , asp conf . ser . 216 : astronomical data analysis software and systems ix , p. , guerrieri , m. , and joyce , r. 2000 , new astronomy , 5 , 25 , d. m. , plante , r. l. , and roberts , d. a. , editors 1999 , asp conf .ser . 172 : astronomical data analysis software and systems viii monet , d. et al 1998 , the usno - sa2.0 catalog , us naval observatory , washington dc . , j. b. and gunn , j. e. 1983 , , 266 , 713 , f. .r. and querci , m. 2000 , , 273 , 257 , j. l. , lasker , b. m. , mclean , b. j. , sturch , c. r. , and jenkner , h. 1990 , , 99 , 2059 smart , w. m. 1977 , textbook on spherical astronomy , cambridge university press , 6th edition , p. b. 1987 , , 99 , 191 , p. b. 1992 , in worrall , d. m. , biemesderfer , c. , and barnes , j. , editors , asp conf .ser . 25 : astronomical data analysis software and systems i , 297 , k. g. , granzer , t. , boyd , l. j. , and epand , d. h. 2000 , , 4011 , 157 , r. p. j. 2000 , in manset , n. , veillet , c. , and crabtree , d. , editors , asp conf .ser . 216 : astronomical data analysis software and systems ix , 101 valdes , f. g. , campusano , l. e. , velasquez , j. d. , & stetson , p. b. 1995 , , 107 , 1119 , d. c. , greisen , e. w. , and harten , r. h. 1981 , , 44 , 363 , d. g. et al 2000 , , 120 , 1579
an unsupervised software `` robot '' that automatically and robustly reduces and analyzes ccd observations of photometric standard stars is described . the robot measures extinction coefficients and other photometric parameters in real time and , more carefully , on the next day . it also reduces and analyzes data from an all - sky camera to detect clouds ; photometric data taken during cloudy periods are automatically rejected . the robot reports its findings back to observers and data analysts via the world - wide web . it can be used to assess photometricity , and to build data on site conditions . the robot s automated and uniform site monitoring represents a minimum standard for any observing site with queue scheduling , a public data archive , or likely participation in any future national virtual observatory .
in terms of effectiveness , the method of lie - integration is one of the most competitive algorithms for numerical computation of gravitational -body dynamics .unlike the `` classical '' ways for numerical integration , this method computes the taylor - coefficients of the solution ( see * ? ? ? * ) .hence , the integration itself is relatively straightforward once these coefficients are known .the derivation of the taylor - coefficients for a particular ordinary differential equation is based on the so - called lie - operator .recalling the basics of this method , we define this operator as and by involving this definition , an advancement by of the ordinary differential equation can be written as the numerical method called lie - integration is the finite approximation of the above equation for exponential expansion ( up to a certain order which can either be fixed or be adaptively varied , see also sec . 3.1 in * ?in order to effectively obtain these coefficients , recurrence formulae can be applied for the cartesian coordinates of the orbiting bodies which are directly bootstrapped with the initial conditions .such formulae are known for the gravitational -body problem .a similar kind of relation has been obtained for the restricted three - body problem , and relativistic and non - gravitational effects ( such as yarkovsky force ) can be included as well .in addition , semi - analytic calculations can also be performed to obtain parametric derivatives of observables with respect to orbital elements .in this paper we present such recurrence formulae for the orbital elements in the case of spatial gravitational -body problem .recently , the relations for planar orbital elements have been derived .therefore , our goal now is to extend these relations to the third dimension by including the orbital elements _ related _ to the orbital inclination and ascending node. it should be noted , however , that the relations are not obtained for the longitude of ascending node directly , since it is meaningless in the limit . in the following section , sec .[ sec : lienbody ] , we describe the problem itself and the recurrence relations for the cartesian coordinates and velocities . the discussion of the spatial problem is split into three parts . sec .[ sec : lieangularmomentum ] details the angular momentum vector and the related orbital orientation .the next part , sec . [ sec : lieeccentricity ] shows how the orbital eccentricity can be treated in the spatial problem .the set of relations is ended with the mean longitude ( sec .[ sec : liemeanlongitude ] ) . in sec .[ sec : higherorderderivatives ] we demonstrate how higher order derivatives are obtained .our conclusions are summarized in sec .[ sec : summary ] .if we consider cartesian coordinates and velocities , the recurrence relations for the spatial gravitational -body problem have the same structure as in the planar case . similarly to , let us fix one of the bodies ( e.g. the sun in the case of the solar system ) at the center and this body is orbited by additional ones , indexed by . in total , we deal with bodies , having a mass of and , respectively . if we denote the coordinates and velocities of the body by and , we can define the central and mutual distances and as and , the inverse cubic distances and and the standard gravitational parameters .the quantities , and are also employed in the series of recurrence relations . 
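before turning to the explicit relations , the idea ( compute normalized taylor coefficients by recurrence , then sum the truncated lie series ) can be illustrated on the planar two - body problem . the sketch below is only a toy example and is not the spatial -body formulae derived in this paper ; the recurrence used for the auxiliary inverse cubic distance is the standard rule for powers of a taylor series , and the integration order , step size and circular - orbit test are arbitrary choices :

```python
# minimal sketch of lie-series / taylor-coefficient integration for the planar
# two-body problem: normalized taylor coefficients of x, y, vx, vy and of the
# auxiliary quantity phi = r^-3 are built term by term via cauchy-product
# recurrences, then summed as a truncated exponential (lie) series.
import numpy as np

def lie_step(x, y, vx, vy, gm, h, order=12):
    # normalized taylor coefficients: f(t+h) = sum_n F[n] * h**n
    X, Y, VX, VY = [np.zeros(order + 1) for _ in range(4)]
    S = np.zeros(order + 1)      # s = x^2 + y^2
    PHI = np.zeros(order + 1)    # phi = s^(-3/2) = r^-3
    X[0], Y[0], VX[0], VY[0] = x, y, vx, vy
    S[0] = x * x + y * y
    PHI[0] = S[0] ** (-1.5)
    p = -1.5
    for n in range(order):
        # cauchy products for the accelerations  ax = -gm * x * phi, etc.
        ax = -gm * sum(X[k] * PHI[n - k] for k in range(n + 1))
        ay = -gm * sum(Y[k] * PHI[n - k] for k in range(n + 1))
        X[n + 1] = VX[n] / (n + 1)
        Y[n + 1] = VY[n] / (n + 1)
        VX[n + 1] = ax / (n + 1)
        VY[n + 1] = ay / (n + 1)
        # update the auxiliary series: S, then PHI via the power rule for s^p
        m = n + 1
        S[m] = sum(X[k] * X[m - k] + Y[k] * Y[m - k] for k in range(m + 1))
        PHI[m] = sum(((p + 1) * j - m) * S[j] * PHI[m - j]
                     for j in range(1, m + 1)) / (m * S[0])
    powers = h ** np.arange(order + 1)
    return (X @ powers, Y @ powers, VX @ powers, VY @ powers)

# circular orbit test: gm = 1, r = 1, v = 1, period 2*pi
state = (1.0, 0.0, 0.0, 1.0)
for _ in range(100):
    state = lie_step(*state, gm=1.0, h=2 * np.pi / 100)
print(state)   # should be very close to the initial (1, 0, 0, 1)
```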
with these quantities ,the recurrence relations for the coordinates and velocities can be written as , \end{aligned}\ ] ] while the relations for and also have the same structure .the relations for the reciprocal cubic distances can be computed in a similar manner as it is done in the planar case , for instance , using eqs .( 3)(6 ) from .once the recurrence relations are obtained and evaluated with the appropriate initial conditions , temporal evolution can be computed with the finite approximation of here the summation limit refers to the maximum integration order .of course , this calculation is performed not only for the coordinates but for all of the cartesian coordinates and velocities .in the following , we detail the computations and relations comprehending the orbital angular momentum and the orientation of the orbit . in the case of the planar problem ,the angular momentum is a pseudoscalar since it is the hodge - dual of a skew - symmetric tensor of rank 2 . in the spatial case ,the angular momentum is still a skew - symmetric tensor of rank 2 , hence it will have a 3 component dual in a form of a pseudovector . for the body ,let us denote these 3 components by , and , respectively .these are computed as the first order lie - derivatives of these pseudovector components can similarly be computed like the pseudoscalar angular momentum in the planar case , viz .}_{ij}\label{eq : cxlie } , \\lc_{yi } & = & \sum\limits_{j\ne i}gm_j\hat\phi_{ij}s^{[y]}_{ij}\label{eq : cylie } , \\lc_{zi } & = & \sum\limits_{j\ne i}gm_j\hat\phi_{ij}s^{[z]}_{ij}\label{eq : czlie},\end{aligned}\ ] ] where }_{ij} ] and }_{ij} ] is needed to be computed , using the rule presented in sec .[ sec : liefractions ] . here , the numerator is ( with zero lie - derivatives ) , so eq . ( [ eq : fractions ] )can further be simplified .alternatively , eq . ( 51 ) of can be used considering the exponent of .* once ] as a separate variable and store it accordingly in conjunction with its higher order derivatives .in addition , a trilinear expansion can also be speeded up if a product like is expanded in two bilinear substeps , namely first one compute in the usual manner then is written as this kind of optimization reduces the number of operations from to , however , auxiliary variables and the respective arrays are needed to be introduced .the higher order relations for can also be considered similarly since the terms appearing in eq .( [ eq : llambdai ] ) are bi- , tri- or quadrilinear functions of the terms and quantities for which the recurrence relations have already been obtained .the terms , , , and are complex expressions , however , these are still _ rational functions _ of quantities for which the recurrence series are known .this paper completes the recurrence relations for the lie - derivatives of the osculating orbital elements in the case of the spatial -body problem .these relations can be exploited to integrate directly the equations of motions that are parameterized via the orbital elements .qualitatively , the advantages and disadvantages of this approach are the same what has been concluded for the planar problem .namely , evolving orbital elements instead of cartesian components results in larger stepsizes . 
on the other hand , the complex implementation and the need of more computing power ( for the actual evaluation a single step ) could yield only marginal benefit .an initial implementation for a demonstration and validation of the formulae presented in this article can be downloaded from our web page as well as these codes are available upon request .they are also included in the supplement appended to the electronic version of the paper .correspondingly to the planar case , coordinates and velocities do appear in the recurrence relations but in a form of purely auxiliary quantities .further studies can therefore focus on the elimination of the need of coordinates .this is particularly interesting in the case of mean longitude where the third direction is preferred .such derivations might significantly reduce the computing demands as well .* acknowledgments . *the author would like to thank a. lszl for his helpful comments about the tensor rank analysis .the author also thanks the anonymous referees for their thorough reviews of the manuscript .the author thanks lszl szabados for the careful proofreading .this work has been supported by the hungarian academy of sciences via the grant lp2012 - 31 .additional support is received from the hungarian otka grants k-109276 and k-104607 .bancelin , d. ; hestroffer , d. & thuillot , w. : numerical integration of dynamical systems with lie series .relativistic acceleration and non - gravitational forces .. astron . * 112 * , 221234 ( 2012 )
if one has to attain high accuracy over long timescales during the numerical computation of the -body problem , the method called lie - integration is one of the most effective algorithms . in this paper we present a set of recurrence relations with which the coefficients needed by the lie - integration of the orbital elements related to the spatial -body problem can be derived up to arbitrary order . similarly to the planar case , these formulae yield identically zero series in the case of no perturbations . in addition , analogously to the planar problem , the derivation of the formulae has two stages : the relations are first obtained to first order , and higher - order relations are then expanded by directly exploiting the multilinear and fractional properties of the lie - operator .
the last decades have been characterized by a progressive increase of data production and storage .indeed , the informatization of most aspects of human activities , ranging from simple tasks such as phone calls to shopping habits generates an ever increasing collection of data that can be organized and used for planning . at the same time, most scientific projects such as in genetics , astronomy and neuroscience generate large amounts of data that needs to be analyzed and understood .this trend has given rise to the new term _big data _ .once such data is organized in a dataset , it is necessary to find patterns concealed in the vast mass of values , which is the objective of _ data mining _ . because the identification of important patterns ( e.g. those that recur frequently or are rare ) is impossible to be performed manually, it is necessary to resort to automated pattern recognition .nevertheless , it is important to note that pattern recognition remains also critical for organizing and understanding smaller sets of data , such as in medical diagnosis , industrial quality control , and expensive data . the problem of pattern recognition consists in assigning classes or categories to observations or individuals .this can be done in two main ways : ( i ) with the help of examples or prototypes ( _ supervised classification _ ) ; and ( ii ) taking into account only the properties of the objects ( _ unsupervised classification _ or _ clustering _ ) .though seemingly simple , pattern recognition often turns out to be a challenging activity .this is mainly a consequence of _ overlap _ between the features of different groups in the data , i.e. objects in a class have similar properties as those of other classes .however , several other issues such as choice of features , noise , sampling , also impose further problems while classifying data .even when the features are well - chosen and the data has good quality ( properly sampled and without noise ) , the results of the classification will frequently vary with the choice of different pattern recognition methods .this situation is typically aggravated for sparse data , presence of noise , or non - discriminative features . in an attempt to circumvent such problem and to obtain more robust and versatile classifiers ,a number of pattern recognition methods have been proposed in the literature . despite the long tradition of pattern recognition research , there are no definite guidelines for choosing classifiers .so , those faced with the need to apply pattern recognition are left with the rather difficult task of choosing among several alternative methods .there are many works in the literature describing which classifiers are more suitable for specific tasks ( see e.g. 
and section related works ) , but only a few of them consider a systematic quantitative analysis of their performance .therefore , in this paper , we assess the performance of the classifiers in carefully chosen datasets , without trying to advocate for any specific method .this means that the dataset used in the study is of fundamental importance to the correct interpretation of the results .typical datasets employed to compare the performance of different methods include real world and/or artificial data .advantages of using real datasets include the presence of non - trivial relationships between variables , which may strongly influence the performance of a classifier , the fact that the obtained results will usually be of high confidence when used for samples obtained in the same domain and using a similar criteria , and the presence of noise or unreachable information about the samples ( hidden variables ) .but there is a main drawback associated with using real - world data .even if one manage to consistently compare the results obtained with hundreds of real world datasets , the results will still be specific to the datasets being used .trying to use the information gained in such analyses to a different dataset will most likely be ineffective .furthermore , obtaining more real data to evaluate other classifier characteristics represents sometimes an arduous task .this is the case of applications whose acquisition process is expensive .for this reason , here we chose synthetic datasets .although such datasets are often not representative of specific real - world systems , they can still be used as representations of large classes of data .for example , we can define that all variables in the dataset will have a pearson correlation of 0.8 , and study the behavior of the classifiers when setting this as the main data constrain .a natural choice of distribution for the variables in the dataset is the multivariate normal distribution .this choice is supported by the well - known central limit theorem , which states that , under certain conditions , the mean of a large number of independent random variables will converge to a normal distribution .this ubiquitous theorem can be used to conclude that , between all infinite possibilities of probability density distributions , the normal distribution is the most likely to represent the data at hand .a second possible choice of data distribution may be the power - law distribution .this is so because there is a version of the central limit theorem stating that the sum of independent random variable with heavy - tailed distributions is generally power - law distributed .nevertheless , here we use only normal distribution , leaving power - law distributed variables for a future study .since one of our main concerns is making an accessible practical study of the classifiers , we decided to only analyze classifiers available in the weka software , which was chosen because of its popularity among researchers .in addition , since the software is open - source , any researcher can look at the code of any specific classifier and confirm the specific procedure being used for the classification . 
since weka has many classifiers available, we decided to select a subset of the most commonly used ones according to .one distinctive feature of the present work is the procedure we use to compare classifiers .many works in the literature try to find the best accuracy that a classifier can give and then present this value as the quality of the classifier .the truth is that finding the highest accuracy for a classifier is usually a troublesome task .additionally , if this high accuracy can only be achieved for very specific values of the classifier parameters , it is likely that for a different dataset the result will be worse , since the parameter was tuned for the specific data analyzed .therefore , besides giving a high accuracy , it is desirable that the classifier can give such values for accuracy without being too _ sensitive _ regarding changes of parameters .that is , a good classifier must provide a good classification for a large range of values of its parameters . in order to study all aforementioned aspects of the classifiers ,this work is divided in three main parts .first , we compare the performance of the classifiers when using the default parameters set by weka .this is probably the most common way researchers use the software .this happens because changing the classifier parameters in order to find the best classification value is a cumbersome task , and many researchers do not want to bother with that .our second analysis concerns the variation of single parameters of the classifiers , while maintaining other parameters in the default value .that is , we study how the classification results are affected when changing each parameter .therefore , we look for the parameters that actually matters for the results , and how one can improve their results when dealing with such parameters . 
finally , in order to estimate the optimum accuracy of the classifier , as well as verify its sensitivity to simultaneous changes of its parameters , we randomly sample sets of parameter values to be used in the classifier .the paper is organized as follows .firstly , we swiftly review some previous works aiming at comparing classifiers .we then describe the generation of synthetic datasets and justify the parameters employed in the algorithm .next , we introduce the measurements used to quantify the classifiers performance .we then present a quantitative comparison of classifiers , followed by the conclusions .typical works in the literature dealing with comparison between classifiers can be organized into two main groups : ( a ) comparing among few methods for the purpose of validation and justification of a new approach ; and ( b ) systematic qualitative and quantitative comparison between many representative classifiers .examples of qualitative analysis in ( b ) can be for example found in .these studies perform a comprehensive analysis of several classifiers , describing the drawbacks and advantages of each method , without considering any quantitative tests .a quantitative analysis of classifiers was performed in , where 491 papers comparing quantitatively at least two classification algorithms were analyzed .a comparison of three representative learning methods ( naive bayes , decision trees and svm ) was conducted in , concluding that naive bayes is significantly better than decision trees if the area under curve is employed as a performance measurement .other quantitative studies include the comparison of neural networks with other methods and an extensive comparison of a large set of classifiers over many different datasets , which showed that svms performances very well on classification tasks .finally , quantitative comparisons between classifiers can be also found in specific domain problems , such as in bioinformatics , computer science , medicine and chemistry .in this section we present a generic methodology to construct artificial datasets modeling the different characteristics of real datasets . in addition , we describe the measurements used to evaluate the quality of the classifiers .as noted above , there is a considerable number of reviews in the literature that use real data in order to compare the performance of classifiers .although this approach is useful when one wants to test the performance for specific classes of data , the small domain of possible cases analyzed renders the results insignificant for a true performance test .also , with real data it is impossible to systematically study how the classification is being influenced by different variances and correlations between the data , the dimension of the problem , number of classes , distribution of elements per class and , most importantly , the separation between the classes . in order to approach these problems , while having a diversified dataset to test the general purpose classifiers , we use a multivariate gaussian artificial data generation where many of the parameters chosen are justified by real data , but we can still test variations of them .data distributions may occur in many different forms .we chose a gaussian distribution because it has the potential to represent a large ensemble of possible data occurrences on the real world .this observation is supported by the central limit theorem , since it is assumed that the variables are independent and identically distributed . 
herewe present a novel method for generating random datasets with a given ensemble of covariances matrices , which was strongly based on the study made by hirschberger et al .we aim at generating classes of data with features for each object , with the additional constraint that the number of objects per class is given by the vector .this problem is mathematically restated as finding sets comprising -dimensional vectors , where each set has a number of elements specified by . , and are referred to as _ strong _parameters , in the sense that they do not bring any information about the relationships between the objects ( or vectors ) .furthermore , we aimed at generating data complying with the three following constraints : * * constraint 1 * : the variance of the -th feature of each class is drawn from a fixed distribution , . * * constraint 2 * : the correlation between the -th and -th dimension of each class are drawn from another fixed distribution , . * * constraint 3 * : we can freely tune the expected separation between the classes , given by parameter , which is explained below .traditionally , constraints 1 and 2 are not fully satisfied to generate the data .many studies impose that all the classes display approximately the same variances and correlations , by defining an ensemble of covariance matrices with a fixed spectrum constraint . unfortunately , this approach is somewhat artificial to generate realistic data , since the assumption that all data classes share similar relationships between their features is quite unlikely .our approach is more general because , given the shape of the correlation distribution ( e.g. u - shaped ) , the classes can exhibit all kinds of correlations . in order to generate the data with the strong parameters , and complying with constraints 1 , 2 and 3 , we need covariance matrices ( one for each class ) , where each diagonal and off - diagonal element is drawn , respectively , from and .the most common approach is to randomly draw the mentioned matrix elements from probability density distributions given by and in order to construct the desired matrices .unfortunately , this process does not guarantee a valid covariance matrix because every covariance matrix must be positive and semi - definite . to overcome this problem we use a well - known property stating that for every matrix , the is positive and semi - definite .this property allow us to create a random matrix that will generate a valid covariance matrix .the matrix is known as _ root _ matrix . what is left to usis to define a convenient root matrix so that follows constraints 1 , 2 and 3 .hirschberger et al . came up with an elegant demonstration on how to create a covariance matrix following constraints 1 and 2 .actually , by using their methodology it is even possible to set the skewness of , but this property will not be employed here , since our goal is to generate off - diagonal elements distributed according to a normal distribution . using our algorithm , it is possible to create datasets having the following parameters : * * number of objects per class * : the number of instances in each class can be drawn according to a given distribution .the most common distributions to use are the normal , power - law and exponential distributions . nevertheless , in order to simplify our analysis , here we use classes having an equal number of instances . 
* * number of classes * : we use .this parameter is not varied throughout the study because we found that the results did not appreciably change for different number of classes ( including the binary case ) . ** number of features * : the case represents the most simple case , since it permits the easy visualization of the data . in order to improve the discriminability of the data ,real world datasets oftentimes are described by a larger number of features .here we vary in the range $ ] .hereafter , we refer to the dataset described by features as db . * * standard deviation of the features * : for each class , the standard deviation of each feature is drawn according to a given distribution .the process is repeated for each class , using the same distribution . * * correlation between features * : for each class , the correlations between the features are drawn according to a given distribution .the process is repeated for each class using the same distribution .this means that each class of our dataset will show different kinds of correlation .for example , instances from one class may be described by redundant features , while the same features may be much more efficient in describing samples form other classes .the most common choices for are : ( a ) _ uniform _ , to represent heterogeneous data ; ( b ) gaussian centered in zero , for mostly uncorrelated ; and ( c ) _ u - shaped _ , for data with strong correlations .here we chose a uniform distribution for the correlations . * * separation between the data ( ) * : it is a parameter to be varied throughout the experiments , quantifying how well - separated are the classes , compared to their standard deviation .this parameter is simply a scaling of the variance of the features for each class . since we randomly draw the mean , , for each class in the range , can be used to define an expected separation between the classes .if is large , the classes are well - localized and will present little overlap .otherwise , if is small , the opposite is true .clearly , the separation given by depends on the dimension of the space .nevertheless , there is no need to define a normalization for , because we are just comparing classifiers and not different configurations of the data . throughout the paper we varied the number of features and the separation between classes ( ) . in figure[ f : random_data ] we show some examples of the data that can be generated by varying in a two - dimensional dataset . a fundamental aspect that should be consider when comparing the performance of classifiers is the proper definition of what _ quality _ means .it is impossible to define a single metric that will provide a fair comparison in all possible situations .this means that quality is usually specific to the application and , consequently , many measurements have been proposed . nevertheless , there are some measurements that have widespread use in the literature , the most popular being the accuracy rate , f - measure ( sometimes together with precision and recall ) , kappa statistic , roc area under curve and the time spent for classification ( see for a comprehensive explanation of such measurements ) . because we are mostly interested in a more practical analysis of the classifiers , we use only the accuracy rate , which is defined as the number of true positives plus the number of true negatives , divided by the total number of instances . 
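returning to the dataset construction described above , it can be summarized by the following simplified sketch ; it is not hirschberger et al . 's exact algorithm ( the prescribed correlation distribution is not matched explicitly ) , but it illustrates how a random root matrix guarantees a valid covariance matrix , how the drawn standard deviations are imposed , and how the separation parameter rescales the within - class variance relative to the fixed range of the class means :

```python
# simplified sketch of the synthetic-data generator: each class gets a valid
# (positive semi-definite) covariance matrix built from a random root matrix,
# rescaled to standard deviations drawn from sigma_dist; larger alpha shrinks
# the within-class variance, so classes overlap less.  the exact normalization
# used in the paper may differ.
import numpy as np

rng = np.random.default_rng(0)

def random_covariance(f, sigma_dist):
    root = rng.normal(size=(f, f + 2))
    cov = root @ root.T                                # positive semi-definite
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)                        # unit-diagonal correlation matrix
    sigmas = sigma_dist(f)
    return corr * np.outer(sigmas, sigmas)

def make_dataset(n_classes=10, n_per_class=100, f=2, alpha=2.0,
                 sigma_dist=lambda f: rng.uniform(0.5, 1.5, size=f)):
    X, y = [], []
    for c in range(n_classes):
        mean = rng.uniform(-1.0, 1.0, size=f)          # class centers in a fixed range
        cov = random_covariance(f, sigma_dist) / alpha ** 2
        X.append(rng.multivariate_normal(mean, cov, size=n_per_class))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

# X, y = make_dataset(f=10, alpha=1.5)
```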
in the literature , oftentimes the average accuracy rate is employed to evaluate the performance of classifiers .this practice is so ubiquitous because many researchers decide to use a number of different kinds of measurements , like the ones previously mentioned , and the specific analysis of each metric turns out to be overly cumbersome .the consequence of such approach is that only the average , and at most the deviation of each metric end up being analyzed . in the present study we only used the accuracy rate . to measure the performance of the classifiers ,we generate artificial datasets using the method presented in the previous section and calculate some statistics .the principal quantity extracted from each dataset is the average accuracy rate .in addition , we also compute the variation of accuracy across datasets for this quantity is useful to quantify the confidence of the classifier when the dataset is changed . as such ,if high values of both average and standard deviation appears then it is possible to state that the classifier performs well , but care must be taken when analyzing a new dataset .the standard deviation of accuracy rate computed over instantiations of the classifier with distinct parameters is useful to quantify the sensitivity with respect to a given parameter .the performance of the classifiers was evaluated according to three methodologies .the default values provided by weka were used in the first strategy .we then examined the influence of each parameter on the discriminability of the data .finally , we developed a multivariate strategy .the classifiers considered in the analysis are presented in table [ t : classifier_names ] . throughout the results we used db2f ,db3f db10f to refer to the datasets comprising instances characterized by 2 , 3 10 features , respectively . .[t: classifier_names]list of classifiers evaluated in our study .the abbreviated names used for some classifiers are indicated after the respective name . [ cols="<,<,<",options="header " , ] + +machine learning methods have been applied to recognize patterns and classify instances in a wide variety of applications . currently , several researchers / practioners with different expertise have employed computational tools such as weka to study particular problems . since the appropriate choice of parameters requires a certain knowledge of the underlying mechanisms behind the algorithms , oftentimes these methods are applied with their default configuration of parameters . using the weka software, we evaluated the performance of classifiers using distinct configurations of parameters in order to verify whether it is feasible to improve their performance .a summary of the main results obtained in this study is provided in table [ t : tabela_resumo ] . & * case * & db2 & db10 + & & naive bayes & knn + & & logistic & perceptron + & & knn & c4.5 + & & svm & bayes net + & & svm & svm + & & knn & knn + & & simple cart & simplecart + & & c4.5 & perceptron + + + the analysis of parameters in two - dimensional problems revealed that the naive bayes displays the best performance among all nine classifiers evaluated with default parameters . in this scenario ,the svm turned out to be the classifier with the poorest performance .when instances are described by a set of ten features , the knn outperformed by a large margin the other classifiers , while the svm retained its ordinary performance . 
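the evaluation protocol can be made concrete with the sketch below , in which scikit - learn classifiers stand in for the weka implementations ( an assumption of this illustration , as is the use of 10-fold cross - validated accuracy ) ; part ( 1 ) averages default - parameter accuracies over independently generated datasets , and part ( 2 ) applies the random - configuration strategy to the svm :

```python
# illustration of the evaluation protocol with scikit-learn stand-ins for weka.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def gaussian_blobs(n_classes=10, n_per_class=100, f=10, alpha=2.0):
    X, y = [], []
    for c in range(n_classes):
        mean = rng.uniform(-1, 1, size=f)
        X.append(rng.normal(mean, 1.0 / alpha, size=(n_per_class, f)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

# (1) default parameters, averaged over independently generated datasets
for name, clf in [('naive bayes', GaussianNB()), ('knn', KNeighborsClassifier()),
                  ('svm', SVC())]:
    accs = [cross_val_score(clf, *gaussian_blobs(), cv=10).mean() for _ in range(5)]
    print(name, np.mean(accs), np.std(accs))

# (2) randomly sampled svm configurations: the best accuracy estimates the
#     optimum, the spread reflects sensitivity to the parameters
X, y = gaussian_blobs()
accs = [cross_val_score(SVC(C=10 ** rng.uniform(-2, 3),
                            gamma=10 ** rng.uniform(-3, 1)), X, y, cv=10).mean()
        for _ in range(30)]
print('svm random search: best', max(accs), 'mean', np.mean(accs), 'std', np.std(accs))
```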
when just one parameter is allowed to vary ,there is not a large variation in the accuracy compared with the classification achieved with default parameters .the only exceptions are the parameter k of the knn and parameter s of svm ( with puk kernel ) . in these cases ,the appropriate choice of the parameters enabled an average increase of 6% in accuracy .surprisingly , we found that when the same analysis is performed with a ten - dimensional dataset , the improvement in performance surpasses 20% for the svm .finally , we developed a strategy in which all the configuration of parameters are chosen at random . despite its outward simplicity , this strategy is useful to optimize svm performance especially in high - dimensional problems , since the average increase provided by this strategy is higher than 20 % .another important result arising from the experiments is the strong influence of the number of features on the performance of the classifiers .while small differences in performance across distinct classifiers were found in low - dimensional datasets , we found significative differences in performance when we analyzed problems involving several features . in high - dimensional tasks , knn and svm turned out to be the most accurate techniques when default and alternative parameters were considered , respectively .most importantly , we found that the behavior of the performance with the number of features follows three distinct patterns : ( i ) almost constant ( perceptron ) ; ( ii ) monotonic increase ( knn ) , and ( iii ) monotonic decrease ( bayes net ) .these results suggest that number of features of the problem plays a key role on the choice of algorithms and , therefore , it should be considered in practical applications .the results obtained here suggest that for low dimension classification tasks , weka s default parameters provide accuracy rates close to the optimal value , with a few exceptions .the highest discrepancies arose in high - dimensional problems for the svm , indicating that the use of default parameters in these conditions is not recommended in cases where the svm must be employed .one could pursue this line of analysis further to probe the properties of classifiers with regard to other factors such as number of classes , number of instance per class and overlapping between classes .it is also very important to probe the performance in problems where the amount of instances employed to train is scarce , as it happens in occasions when data acquisition represents an expensive , painstaking endeavor .the authors acknowledge financial support from cnpq ( brazil ) ( grant numbers 573583/2008 - 0 , 208449/2010 - 0 , 308118/2010 - 3 and 305940/2010 - 4 ) , fapesp ( grant numbers 2010/00927 - 9 , 2010/19440 - 2 , 2011/22639 - 8 , 2011/50761 - 2 , 2013/06717 - 4 and 2013/14984 - 2 ) and nap escience - prp - usp .10 marquand af , filippone m , ashburner j , girolami m , mourao - miranda j , et al .( 2013 ) automated , high accuracy classification of parkinsonian disorders : a pattern recognition approach .plos one 8(7 ) : e69237 .montavon g , rupp m , gobre v , vazquez - mayagoitia a , hansen k , tkatchenko a , mller k - r , lilienfeld oa ( 2013 ) machine learning of molecular electronic properties in chemical compound space .new journal of physics 15 095003 .yang j , frangi af , yang jy , zhang d , jin z ( 2005 ) kpca plus lda : a complete kernel fisher discriminant framework for feature extraction and recognition .ieee transactions pattern analysis and machine 
intelligence 27:230244 .tavares lg , lopes hs , lima cre ( 2008 ) a comparative study of machine learning methods for detecting promoters in bacterial dna sequences .international conference on intelligent computing 5227:959966 .hirschberger m , qi y , steuer re ( 2007 ) randomly generating portfolio - selection covariance matrices with specified distributional characteristics .european journal of operational research 177:16101625 .
pattern recognition techniques have been employed in a myriad of industrial , medical , commercial and academic applications . to tackle such a diversity of data , many techniques have been devised . however , despite the long tradition of pattern recognition research , there is no technique that yields the best classification in all scenarios . the consideration of as many techniques as possible is therefore a fundamental practice in applications aiming at high accuracy . typical works comparing methods either emphasize the performance of a given algorithm in validation tests or systematically compare various algorithms , assuming that the practical use of these methods is done by experts . in many occasions , however , researchers have to deal with their practical classification tasks without an in - depth knowledge of the underlying mechanisms behind the parameters . actually , the adequate choice of classifiers and parameters alike in such practical circumstances constitutes a long - standing problem and is the subject of the current paper . we carried out a study on the performance of nine well - known classifiers implemented in the weka framework and examined how the accuracy depends on their parameter configurations . the analysis of performance with default parameters revealed that the k - nearest neighbors method exceeds the other methods by a large margin when high - dimensional datasets are considered . when other configurations of parameters were allowed , we found that it is possible to improve the accuracy of the svm by more than 20% even if parameters are set randomly . taken together , the investigation conducted in this paper suggests that , apart from the svm implementation , weka 's default configuration of parameters provides a performance close to the one achieved with the optimal configuration .
many static analyses have been developed to check safety properties of sequential programs , while more and more software applications are multithreaded . naive approaches to analyzing such applications would explore all possible interleavings , which is impractical . some previous proposals avoid this combinatorial explosion ( see related work ) . our contribution is to show that _ every _ static analysis framework for single - thread programs extends to one that analyzes multithreaded code with dynamic thread creation , and with only a modest increase in complexity . we ignore concurrency - specific bugs , e.g. , race conditions or deadlocks , as do some other authors . if any , such bugs can be detected using orthogonal techniques . [ [ outline ] ] outline we describe in section [ syntax ] a toy imperative language . this contains essential features of c with posix threads , including a thread creation primitive . the main feature of multithreaded code is that parallel threads may _ interfere _ , i.e. , side - effects of one thread may change the value of variables in other threads . to take interference between threads into account , we model the behavior of a program by an infinite transition system : this is the operational semantics of our language , which we describe in section [ subsection : evol ] . it is common practice in abstract interpretation to go from the concrete to the abstract semantics through an intermediate so - called collecting semantics . in our case a different but similar concept is needed , which we call semantics , and which we introduce in section [ section : nonst ] . this semantics discovers states , accumulates transitions encountered in the current thread and collects interferences from other threads . the main properties of this semantics , proposition [ prop : basic ] and theorem [ theorem : denot ] , are the technical core of this paper . these properties allow us to overapproximate the semantics by a denotational semantics . section [ abstract ] then derives an abstract semantics from the semantics through abstract interpretation . we discuss algorithmic issues , implementation , questions of precision , and possible extensions in section [ algosm ] , examine the complexity of our analysis technique in section [ section : complexity ] , and conclude in section [ section : conclusion ] . [ [ related - work ] ] related work a great variety of static analyses that compute safety properties of single - thread programs have been developed , e.g. , intervals , points - to graphs , non - relational stores or relational stores such as octagons . our approach is similar to rugina and rinard , in the sense that we also use an abstract semantics that derives tuples containing information about current states , transitions of the current thread , and interference from other threads . while their main parallel primitive is , which runs two threads and waits for their completion before resuming computation , we are mostly interested in the more challenging thread creation primitive , which spawns a thread that can survive its father . in section [ improvement ] , we show how such constructs can be handled with our techniques . some authors present generalizations of specific analyses to multithreaded code , e.g. , venet and brat and lammich and müller - olm , while our framework extends any single - threaded code analysis .
our approach also has some similarities with flanagan and qadeer .they use a model - checking approach to verify multi - threaded programs .their algorithm computes a guarantee condition for each thread ; one can see our static analysis framework as computing a guarantee , too .furthermore , both analyses abstract away both number and ordering of interferences from other threads .flanagan and qadeer s approach still keeps some concrete information , in the form of triples containing a thread i d , and concrete stores before and after transitions .they claim that their algorithm takes polynomial time in the size of the computed set of triples. however , such sets can have exponential size in the number of global variables of the program .when the nesting depth of loops and thread creation statements is bounded , our algorithm works in polynomial time . moreover ,we demonstrate that our analysis is still precise on realistic examples . finally , while flanagan and qadeer assume a given , static , set of threads created at program start - up , we handle dynamic thread creation .the same restriction is required in malkis __ .the 3vmc tool has a more general scope .this is an extension of tvla designed to do shape analysis and to detect specific multithreaded bugs .however , even without multithreading , tvla already runs in doubly exponential time .other papers focus on bugs that arise because of multithreading primitives .this is orthogonal to our work .see for atomicity properties , locksimth and goblint tools for data - races and for deadlock detection using geometric ideas .the syntax of our language is given in fig .[ figsyntax ] .the syntax of the language is decomposed in two parts : commands ( ) and statements ( ) .a statement is a command with a return label where it should go after completion .e.g. , in fig [ subfig : while ] , a thread at label will execute \ccreate(\nr[a3 ] x:=x+1),\nlrefb{a1} ] be the partial function defined by }(j){\stackrel{\text{\tiny def}}{=}}\begin{cases } \ell & \text{if } i = j\\p(j ) & \text{if } i\in\dom(p)\smallsetminus\{j\}\\ \text{undefined } & \text{else}\\ \end{cases } ] the store that maps to the integer .the transitions generated by the statements extracted from fig .[ subfig : while ] are : x:=0,\nlref{a1}}=&\{((i , p,[x = n],\h),(i,{p[i \fleche \nlrefb{a1}]},[x=0],\h))\mid p(i)=\nlrefb{init1}\\ & \wedge i\in\ids\wedge n\in\mathbb{z}\}.\\ \tr{\nr[a3 ] x:=x+1,\lend}= & \{((i , p,[x = n],\h),(i,{p[i \fleche \lend]},[x = n+1],\h))\mid p(i)=\nlrefb{a3 } \\&\wedge i\in\ids\wedge n\in\mathbb{z}\}.\end{aligned}\ ] ]let be the set of labels of the statement .we also define by induction on commands , the set of labels of subthreads by , + , + , + , + and , for basic commands .a statement generates only transitions from its labels and to its labels , this is formalized by the following lemma : [ lemma : a ] if then and and . as a consequence of lemma [ lemma : a ] , we have the following lemma : [ lemma : abis ] if then for all state , if , during the execution of a statement , a thread creates another thred , then , the subthread is in a label of the command , furthermore , it is in .[ lemma : asub ] if and then .[ lemma : asub2 ] if and then .furthermore and .notice that in fig .[ figrules ] some statements are `` atomic '' .we call these statements _ basic statements_. formally , a basic statement is a statement of the form , or . 
on basic statement , we have a more precise lemma on labels : [ lemma : a]let be a basic statement .+ if then and and .to prepare the grounds for abstraction , we introduce an intermediate semantics , called semantics , which associates a function on configurations with each statement .the aim of this semantics is to associate with each statement a transfer function that will be abstracted ( see section [ abstract ] ) as an abstract transfer function .a _ concrete configuration _ is a tuple : is the current state of the system during an execution , , for _ guarantee _ , represents what the current thread and its descendants can do and , for _ assume _ , represents what the other threads can do .formally , is a set of states , and and are sets of transitions containing .the set of concrete configurations is a complete lattice for the ordering .proposition [ prop : coroc ] will establish the link between operational and semantics .figure [ fig : states ] illustrates the execution of a whole program .each vertical line represents the execution of a thread from top to bottom , and each horizontal line represents the creation of a thread . at the beginning ( top of the figure ) , there is only the thread . during execution ,each thread may execute transitions . at state , denotes the _ currently running thread _ ( or _ current thread _ ) , see fig .[ fig : opfunc ] . on fig .[ fig : states ] , the current thread of is and the current thread of is . during the program execution given in fig .[ fig : states ] , creates .we say that is a _ child _ of and is the _ parent _ of .furthermore , creates .we then introduce the concept of _ descendant _ : the thread is a descendant of because it has been created by which has been created by .more precisely , descendants depend on genealogies .consider the state with ] .when the execution of the program reaches the state , the set of descendants of from is . in a genealogy, there are two important pieces of information .first , there is a tree structure : a thread creates children that may creates children and so on ... second , there is a global time , e.g. , in , the thread has been created before the thread .[ lemma : descendant ] let a genealogy and , which are not created in .therefore , either or .we prove this lemma by induction on . if , then .let us consider the case .by induction hypothesis either or . in the first case ,if , therefore and , else . in the second case ,let us consider the subcase .therefore .in addition to this , is not created in ( a thread can not be created twice in a genealogy ) , therefore . hence and .the subcase is similar .let us consider the subcase . therefore and .we also need to consider sub - genealogies such as . in this partial genealogy, has not been created by .hence .notice that even though the creation of is in the genealogy . during an execution ,after having encountered a state we distinguish two kinds of descendants of : [ enum : past]those which already exist in state ( except itself ) and their descendants , [ enum : future] and its other descendants .each thread of kind ( [ enum : past ] ) has been created by a statement executed by .we call the states from which a thread of kind ( [ enum : future ] ) can execute a transition . in fig .[ fig : states ] , the thick lines describe all the states encountered while executing the program that fall into . the following lemma explicits some properties of : [ lemma : f+ ] let a set of transitions .let therefore : 1 . 
if then 2 .if then let and .by definition of transitions , there exists such that . because , .therefore , if , i.e. , , then ( by definition of ) .let us assume that .let .therefore , there exists such that and . because , by definition , .therefore .according to lemma [ lemma : descendant ] , .hence and therefore . when a schedule transition is executed , the current thread change .the futur descendants of the past current thread and the new current thread are diffents .this is formalized by the following lemma : [ lemma : f- ] if then .let and .therefore .let .by definition of , there exists such that , and .furthermore and are in .therefore and are either created in , or are .hence , and can not be created in .therefore , and therefore .using lemma [ lemma : descendant ] we conclude that .this is a contradiction with and . during the execution of a set of transition do not create threads , the set of descendants does not increase : [ lemma : ndesce ] let a set of transitions such that : + for all .+ let , and . if then .let a sequence of states such that , for all , , and .let . if then , and then and then .therefore , in all cases and then , by straightforward induction , .[ lemma : ndescew ] let a set of transitions such that : + for all .+ let and . if then .apply lemma [ lemma : ndesce ] with .these lemmas has a consequence on : [ lemma : e ] let a set of transitions such that : + for all .+ if and then .let and .by lemma [ lemma : ndescew ] and by definition of , .[ lemma : h ] let a set of transitions such that : + for all .+ let a set of transitions .let three states such that , and . if then .let , and . by lemma[ lemma : ndescew ] and by definition of , . therefore . because , , therefore . hence .let us recall some classical definitions. for any binary relation on states let be the _ restriction _ of to and be the _ application _ of on . is the _ composition _ of and .let where and . finally , for any set of states , let be the _ complement _ of .the definition of the semantics of a statement requires some intermediate relations and sets .the formal definition is given by the following definition : let us read together , on some special cases shown in fig .[ fig : illusconcr ] .this will explain the rather intimidating of definition [ def : concrsem ] step by step , introducing the necessary complications as they come along .the statement is executed between states and .figure describes the single - thread case : there is no thread interaction during the execution of .the thread is spawned after the execution of the statement .e.g. , in fig .[ subfig : create ] , y:=0;\nlref{b2} ] , then by lemma [ lemma : f+ ] , and . furthermore , by lemma [ lemma : f ] , .the following proposition show that collect all transitions generated by a statement .[ proposition : guarantee ] let a concrete configuration , a statement and .let and such that .if ^{\star} ] .let an arbitrary integer .then , let the smallest ( if it exists ) such that . 
then , by definition , .basic statement have common properties , therefore , we will study them at the same time .proposition [ prop : basic ] explain how to overapproximate the semantics of a basic statement .it will be used in the abstract semantics .an execution path of a basic statement can be decomposed in interferences , then one transition of the basic statement , and then , some other interferences .the following lemma show this .this lemma will allow us to prove proposition [ prop : basic ] .[ lemma : c]let be a basic statement , + and .let then : * either and , * or + and let us consider the case . by definition of , . therefore . by lemma [ lemma : b ] , , hence , .let us consider the case because , ^{\star} ] .let a sequence of states such that and and for all , .notice that and therefore . by lemma [ lemma : e ] ,therefore . by lemma [ lemma : a ] , .let the smallest ( if it exists ) such that .therefore . by lemma [ lemma : b ] , .according to lemma [ lemma : a ] , this is a contradiction .therefore , for all , . by lemma [ lemma : a ] , , hence .therefore now , we introduce some claims on the semantics of basic statements .claims [ claim : suv ] and [ claim : sudv ] say that when a basic statement is executed , only one thread is executed .notice that creates a subthread , but does not execute it .the claim [ claim : sev ] caracterizes the transitions done by the current thread .the claim [ claim : bs ] gives an overapproximation of , the set of states reached at the end of the execution of a basic statement .[ claim : suv ] let a basic statement and .therefore , .let .therefore , .so , there exists and such that , and .hence , by lemma [ lemma : f+ ] , . given that , .but , because , .there is a contradiction .hence .[ claim : sudv ] let a basic statement and .therefore , .let .there exists and such that , and .let and .because , .let .. therefore and . by lemma [ lemma : e ] ,hence .let . by definition of and a straightforward induction on , . because , then . by lemma [ lemma : f+ ] ,this is contradictory with .hence .[ claim : sev ] let a basic statement and . + therefore , .. then and .then , there exists such that . because , by lemma [ lemma : a ] , . by lemma [ lemma : c ] , . because , .[ claim : bs ] let a basic statement , and .+ therefore , .let .therefore , and there exists such that . because , according to lemma [ lemma : c ] , [ prop : basic ] let be a basic statement , then : where + and this proposition is a straightforward consequence of claims [ claim : suv ] , [ claim : sudv ] , [ claim : sev ] and [ claim : bs ] . the next theorem shows how the semantics can be over - approximated by a denotational semantics , and is the key point in defining the abstract semantics .[ theorem : denot ] 1 .[ fcomposition] 2 .[ fif] 3 .[ fwhile ] + with 4 .[ fcreate] + with while points [ fcomposition ] and [ fwhile ] are as expected , the overapproximation of semantics of ( point [ fcreate ] ) computes interferences which will arise from executing the child and its descendants with and then combines this result with the configuration of the current thread .this theorem will be proved later .the following proposition consider a statement set of transition .the only constraint on is on the use of labels of .the proposition consider an execution of the statement from a state to a state , and , after , an execution of other commands .the labels of mays only be used : * for interferences , * or by the statement , * after having applied the statement , i.e. , after . 
.this proposition ensures us that any transition executed by a thread created during the execution of ( i.e. , between and ) is a transition generated by the statement .[ prop : apres1 ] let a statement , + .let and a set of transitions such that for all , if then or .let a sequence of states such that for all , .therefore , if then either or let for all , let .let us show by induction on that for all , if then .let and .therefore . given that , by lemma [ lemma : r ] , .by induction hypothesis , for all , if then .let . if , therefore , .furthermore , by induction hypothesis , . by definition of , . by lemma [ lemma : a ] , . if , then , .hence , as above , . hence , according to lemma [ lemma : asub ] , .else , by definition of a transition , .let such that , hence , either , or .in the last case , and therefore . hence , by definition of , .[ lemma : g ] in this section , we consider an initial configuration : and a sequence .we write and and define : + + + + + + [ lemma : g+ ] if and then . if and then .let us consider that . hence because labels of are pairwise distinct , .by lemma [ lemma : abis ] , .hence , by lemma [ lemma : g ] , the case is similar .[ lemma : cbis ] using the above notations , for every such that , * either and * or there exists such that , let . either or . in the first case , either , or .if , then , by definition , . by definition , and {s_0}{s }we just have to choose . in the second case , .let . since , and .furthermore , so .since ^{\star } ] .recall , hence ^{\star} ] since , . since , .furthemore , so , according to lemma [ lemma : f ] , .given that , according to lemma [ lemma : abis ] , .hence . because the labels of are pairwise distincts , .using lemma [ lemma : f ] , we conclude that .given that and and , we conclude that .furthermore and , therefore .^{\star} ] .recall that ^{\star} ] .therefore , by lemma [ lemma : f ] , if , then . therefore , because labels are pairwise distinct , if , then .therefore , by lemma [ lemma : abis ] , if , then .hence , ^{\star} ] .therefore .[ lemma : cter ] using the above notations , for every such that and , there exists such that , and {s_0}{s_1} ] [ lemma : ext1 ] using the notations of this section , let such that , {s_0}{s_1} ] .notice that , by lemma [ lemma : f+ ] , . recall that : ^{\star} ] by lemma [ lemma : g ] , ^{\star} ] . by proposition[ prop : apres1 ] , ^{\star} ] .hence {s_0}{s_1};\ext[_1]{s_0}{s_1}=\ext[_1]{s_0}{s_1 } ] and .therefore {s_1}{s_2} ] {s_1}{s_2}=\big [ ( \restrict{\concr{g_1}}{\after(s_1)}\cap\tr_2)\cup \restrict{\concr{a_1}}{\compl{\after(s_1 ) } } \cup \restrict{\concr{g_1}}{\after(s_2 ) } \big]^{\star} ] .hence , by proposition [ prop : apres1 ] applied on the statement , for all , .given that and , by proposition [ prop : apres1 ] applied on the statement , we conclude that for all , .let such that .by lemma [ lemma : ext1 ] , {s_0}{s_1} ] .because , we conclude that {s_1}{s_2} ] .hence {s_0}{s_1};\sche=\ext[_1]{s_0}{s_1} ] , so by proposition [ prop : apres1 ] , .hence , .[ lemma : seq : sub ] using the notations of this section .let .then , there exists and such that and . according to lemma [ lemma : cter ] ,there exists such that and and {s_0}{s_1} ] and {s_1}{s_2} ] 3 . or there exists such that {s_0}{s_1} ]. therefore , there exists and such that , and ^{\star } ] .hence , by proposition [ prop : apres1 ] , either or .if and then .this is contradictory with claim [ claim : sudv ] .therefore . by lemma [ lemma : f ] , . 
hence , by lemmas [ lemma : a ] and [ lemma : gif ] , .we conclude that ^{\ast}{\subseteq}\col_1\cap \ext[_{{+}}]{s_0}{s_1} ] , or , and {s_0}{s_1} ] , or , and {s_0}{s_1} ] , or , and {s_0}{s_1} ] , or , and {s_0}{s_1} ] and .hence , . therefore .in addition to this , according to lemma [ lemma : r ] , , so , for all , .[ lemma : cw ] using the notations of this section , if , then , there exists such that : 1 .either , 2 . or there exists such that and and . let .we consider a sequence of minimal length such that the following properties hold : , , for all , .a such sequence exists because . if for all , then .let us assume , from now , that there exists such that .let the smallest such .therefore , so , .according to lemma [ lemma : e ] , . by lemma [ lemma : b ] , .but , therefore , by lemma [ lemma : abis ] , .therefore , by lemma [ lemma : gw ] , either or . in the first case , by lemma [ lemma : a ] , .let us prove by induction on that for all , .by induction hypothesis ^{\star} ] .property [ fwhile ] of theorem [ theorem : denot ] is a straightforward consequence of claims [ claim : ws ] , [ claim : wse ] , [ claim : wsu ] and [ claim : wsud ] .let a configuration .+ let + let + let + let + let + let + let + let let [ lemma : gcr ] [ lemma : apresc0 ] let a set of transitions .let , , , and such that , , , and .therefore , . according to lemma [ lemma : c ], there exists and such that , , , and . by lemmas [ lemma : e ] and [ lemma : a ] , .let and .let , , , and such that , respectively , the genealogy of , , , , , is , , , , , .notice that and have the same genealogy . because ^{\ast} ] , by lemma [ lemma : ndescew ] , . by definition of , =\desce_{\h}\{i_0,j\} ] , and . furthermore and .if ^{\ast} ] .therefore , there exists and such that ^{\ast} ] .due to lemma [ lemma : e ] , because , .according to lemma [ lemma : a ] , and . therefore .let .let and such that .let .therefore , and .let and .therefore , .given that , we conclude that ^{\star} ] .then {s_0}{s_1} ] .therefore , by proposition [ proposition : guarantee ] , . .let .therefore there exists such that and . according to lemma [ lemma : ccr ]there exists such that , and .therefore and . .let . according to lemma [ lemma : a ] , .there exists such that .therefore , according to lemma [ lemma : ccr ] , . therefore and , by lemma [ lemma : b ] , . due to lemmas [ lemma : abis ] and [ lemma : gcr ] , .hence . .let .therefore , there exists such that and .notice that by definition of , .assume by contradiction , that . due to lemma[ lemma : e ] , .this is contradictory .hence , by lemma [ lemma : ccr ] , there exists such that , , , , , and . hence , , . according to lemma [ lemma : f- ] .given that , .hence , du to lemma [ lemma : apresw ] , .if , then and . if , then . .let .there exists such that and and . by lemma [ lemma : ccr ], there exists such that , , {s_0}{s_1} ] and by lemma [ lemma : apresc0 ] , ^{\ast} ] . by lemma [ lemma : f+ ] , , therefore ^{\ast} ] .let and .therefore , . if , then and . if , then and .[ lemma : init ] for all and , . 
in particular , if is the set of initial states of a program and , then .the following proposition shows the connection between the operational and the semantics .[ prop : coroc ] consider a program and its set of initial states .let : then : we only have to prove that .let .there exists such that by proposition [ proposition : guarantee ] , by lemma [ lemma : init ] , .hence .it is straightforward to check that .recall that is the set of states that occur on paths starting from . represents all final states reachable by the whole program from an initial state . represents all transitions that may be done during any execution of the program and represents transitions of children of .recall from the theory of abstract interpretation that a _ galois connection _ between a concrete complete lattice and an abstract complete lattice is a pair of monotonic functions and such that ; is called the _ abstraction _ function and the _ concretization _ function .product lattices are ordered by the product ordering and sets of functions from to a lattice are ordered by the pointwise ordering .a monotonic function is an _ abstraction _ of a monotonic function if and only if .it is a classical result that an adjoint uniquely determines the other in a galois connection ; therefore , we sometimes omit the abstraction function ( lower adjoint ) or the concretization function ( upper adjoint ) .our concrete lattices are the powersets and ordered by inclusion .remember , our goal is to adapt any given single - thread analysis in a multithreaded setting .accordingly , we are given an abstract complete lattice of abstract states and an abstract complete lattice of abstract transitions .these concrete and abstract lattices are linked by two galois connections , respectively and .we assume that abstractions of states and transitions depend only on stores and that all the transitions that leave the store unchanged are in .this assumption allows us to abstract and as the least abstract transition .we also assume we are given the abstract operators of table [ fig : abstractions ] , which are correct abstraction of the corresponding concrete functions .we assume a special label which is never used in statements .furthermore , we define .we define a galois connection between and : and ( by convention , this set is when ) .the set represents the set of labels that may have been encountered before reaching this point of the program .note that we have two distinct ways of abstracting states , either by using , which only depends on the store , or by using which only depends on the genealogy and the current thread .the latter is specific to the multithreaded case , and is used to infer information about possible interferences .just as was not enough to abstract states in the multithreaded setting , is not enough , and lose the information that a given transition is or not in a given .this information is needed because is used in theorem [ theorem : denot ] and fig .[ concrfcts ] .let us introduce the following galois connection between the concrete lattice and the abstract lattice , the product of copies of , to this end : + .+ is an abstraction of the `` guarantee condition '' : represents the whole set , and represents the interferences of a child with its parent , i.e. , abstracts ._ abstract configurations _ are tuples such that and .the meaning of each component of an abstract configuration is given by the galois connection : abstracts the possible current stores . 
abstracts the labels encountered so far in the execution . is an abstraction of interferences . as an application, we show some concrete and abstract stores that can be used in practice .we define a galois connection between concrete and abstract stores and encode both _abstract states _ and _ abstract transitions _ as abstract stores , i.e. , .abstract states are concretized by : [ [ non - relational - store ] ] non - relational store + + + + + + + + + + + + + + + + + + + + such a store is a map from the set of variables to some set of _ concrete values _ , and abstract stores are maps from to some complete lattice of _ abstract values_. given a galois connection between and , the following is a classical , so called non - relational abstraction of stores : let and be the abstract value of the expression and the set of variables that may be represented by , respectively , in the context .}\\ { \aecr{lv:=e}(\abstr{c})}&{\stackrel{\text{\tiny def}}{= } } & \bigcup_{x\in \addr{\abstr{c}}{lv } } { \aecr{x:=e}(\abstr{c})}\\ { { \abstr{write\text{-}inter}_{lv:=e}}(\abstr{c } ) } & { \stackrel{\text{\tiny def}}{=}}&{\lambda x. \text{if } x\in\addr{\abstr{c}}{lv } \text { then } \valeur{\abstr{c}}{e}\text { else } \bot } \\ \acinter{\abstr{i}}(\abstr{c})&{\stackrel{\text{\tiny def}}{=}}&\abstr{i}\sqcup\abstr{c}\\ { \abstr{enforce}}_{x}({\sigma})&{\stackrel{\text{\tiny def}}{=}}&{{\sigma}[x \fleche { \mathit{true}}^{\sharp}]}\text { and } { \abstr{enforce}}_{\neg x}({\sigma})={{\sigma}[x \fleche { \mathit{false}}^{\sharp } ] } \end{aligned}\ ] ] [ [ genkill - analyses ] ] gen /kill analyses + + + + + + + + + + + + + + + + + in such analyses , stores are sets , e.g. , sets of initialized variables , sets of edges of a point - to graph .the set of stores is for some set , , and the abstraction is trivial .each gen / kill analysis gives , for each assignment , two sets : and . these sets may take the current store into account ( e.g. rugina and rinard s `` strong flag '' ) ; ( resp . ) is monotonic ( resp . decreasing ) in .we define the concretization of transitions and the abstract operators : + [ lemma : l ] .[ lemma : ll ] .[ lemma : kunion ] let and two set of transitions and .+ hence , the functions of fig .[ fig : basicabstract ] abstract the corresponding functions of the semantics ( see fig . [ concrfcts ] ) .[ prop : abstract ] the abstract functions , , , , and are abstractions of the concrete functions , , , , and respectively .the cases of and are straightforward .the case of is a straightforward consequence of lemma [ lemma : ll ] .let an abstract configuration and . therefore .let and .therefore , by definition , . by proposition[ prop : basic ] , .hence . according to proposition [ prop : basic ] , with .hence .therefore by lemma [ lemma : kunion ] : + if then , .therefore , by lemma [ lemma : l ] , .hence .given that and , we prove in the same way that is an abstraction of . given that and , we prove in the same way that is an abstraction of .the function updates by adding the modification of the store to all labels encountered so far ( those which are in ) .it does not change because no thread is created .notice that in the case of a non - relational store , we can simplify function using the fact that } ] and and and .2 . the configuration is computed . where ] .the and componnents are not changed because no new thread is created .[ item : child - spawn ] the configuration is computed . where and .notice that because the equality holds .4 . the configuration is computed . 
where ] .5 . the configuration is computed . . ] and ] and ] and ]the abstract semantics is denotational , so we may compute it recursively .this requires to compute fixpoints and may fail to terminate . for this reason , each time we have to compute we compute instead the overapproximation , where is a widening operator , in the following way : assign [ algo : whilebody ] compute if returns , otherwise , assign and go back to [ algo : whilebody ] .our final algorithm is to compute recursively applied to the initial configuration , overapproximating all fixpoint computations .we have implemented two tools , and , in ocaml with the front - end c2newspeak , with two different abstract stores .the first one maps variables to integer intervals and computes an overapproximation of the values of the variables .the second one extends the analysis of allamigeon et al . , which focuses on pointers , integers , c - style strings and structs and detects array overflows .it analyzes programs in full fledged c ( except for dynamic memory allocation library routines ) that use the pthreads multithread library .we ignore mutexes and condition variables in these implementations .this is sound because mutexes and condition variables only restrict possible transitions .we lose precision if mutexes are used to create atomic blocks , but not if they are used only to prevent data - races . in table [ table : benchmarks ] we show some results on benchmarks of differents sizes .means `` lines of code '' .`` '' is a c file , with 3 threads : one thread sends an integer message to another through a shared variable .`` '' is extracted from embedded c code with two threads . `` '' and `` '' are sets of 12 and 15 files respectively , each one focusing on a specific thread interaction .to give an idea of the precision of the analysis , we indicate how many false alarms were raised .our preliminary experiments show that our algorithm loses precision in two ways : through the ( single - thread ) abstraction on stores by abstraction on interferences . indeed , even though our algorithm takes the order of transitions into account for the current thread , it considers that interference transitions may be executed in an arbitrary order and arbitrary many times .this does not cause any loss in `` '' , since the thread which send the message never put an incorrect value in the shared variable .despite the fact that `` '' is a large excerpt of an actual industrial code , the loss of precision is moderate : 7 false alarms are reported on a total of 27 100 lines .furthermore , because of this arbitrary order , our analysis straightforwardly extends to models with `` relaxed - consistency '' and `` temporary '' view of thread memory due to the use of cache , e.g. , openmp .the complexity of our algorithm greatly depends on widening and narrowing operators . given a program , the _ slowness _ of the widening and narrowing in an integer such that : widening - narrowing stops in always at most steps on each loop and whenever is computed ( which also requires doing an abstract fixpoint computation ) .let the _ nesting depth _ of a program be the nesting depth of and of which needs a fixpoint computation , except with no subcommand .] 
have a subcommand .[ prop : complexity ] let be the nesting depth , the number of commands of our program , and , the slowless of our widening .the time complexity of our analysis is assuming operations on abstract stores are done in constant time .this is comparable to the complexity of the corresponding single - thread analysis , and certainly much better that the combinatorial explosion of interleaving - based analyses .furthermore , this is beter than polynomial in an exponential number of states .let , and and be the complexity of analyzing , the size of and the nesting depth of , the slowless of the widening and narrowing on respectively .let and the complexity of assign and of reading respectively .proposition [ prop : complexity ] is a straightforward consequence of the following lemma : the complexity of computing is this lemma is proven by induction .+ + + + if does not contain any subcommand , then the fixpoint computation terminates in one step : + else : notice that we have assumed that operation on are done in constant time in proposition [ prop : complexity ] . this abstract store may be represented in different ways .the main problem is the complexity of the function , which computes a union for each element in .the naive approach is to represent as a map from to . assuming that operations on maps are done in constant time, this approach yields a complexity where is the number of in the program .we may also represent as some map from to such that and the function is done in constant time : } , \abstr{i}\rangle ] where and .+ this widening never widen more than two times on the same variable . therefore this naive widening is linear in the worst case .our technique also applies to other forms of concurrency , fig .[ mfeatures ] displays how rugina and rinard s constructor would be computed with our abstraction .correctness is a straightforward extension of the techniques described in this paper .our model handle programs that use and .then , it can handle openmp programs with `` parallel '' and `` task '' constructors .we have described a generic static analysis technique for multithreaded programs parametrized by a single - thread analysis framework and based on a form of rely - guarantee reasoning . to our knowledge , this is the first such _ modular _ framework : all previous analysis frameworks concentrated on a particular abstract domain .such modularity allows us to leverage any static analysis technique to the multithreaded case .we have illustrated this by applying it to two abstract domains : an interval based one , and a richer one that also analyzes array overflows , strings , pointers . both have been implemented .we have shown that our framework only incurred a moderate ( low - degree polynomial ) amount of added complexity .in particular , we avoid the combinatorial explosion of all interleaving based approaches .our analyses are always correct , and produce reasonably precise information on the programs we tested .clearly , for some programs , taking locks / mutexes and conditions into account will improve precision .we believe that is an orthogonal concern : the non - trivial part of our technique is already present _ without _ synchronization primitives , as should be manifest from the correctness proof of our semantics .we leave the integration of synchronisation primitives with our technique as future work. however , locks whose sole purpose are to prevent data races ( e.g. 
ensuring that two concurrent accesses to the same variable are done in some arbitrary sequential order ) have no influence on precision . taking locks into account may be interesting to isolate atomic blocks .we thank jean goubault - larrecq for helpful comments .a. min , field - sensitive value analysis of embedded c programs with union types and pointer arithmetics , in : acm sigplan lctes06 , acm press , 2006 , pp .5463 , http://www.di.ens.fr/~mine/publi/article-mine-lctes06.pdf .x. allamigeon , w. godard , c. hymans , static analysis of string manipulations in critical embedded c programs , in : k. yi ( ed . ) , static analysis , 13th international symposium ( sas06 ) , vol .4134 of lecture notes in computer science , springer verlag , seoul , korea , 2006 , pp . 3551 .b. steensgaard , points - to analysis in almost linear time , in : popl 96 : proceedings of the 23rd acm sigplan - sigact symposium on principles of programming languages , acm press , new york , ny , usa , 1996 , pp . 3241 . http://dx.doi.org/http://doi.acm.org/10.1145/237721.237727 [ ] .a. min , a new numerical abstract domain based on difference - bound matrices , in : pado ii , vol .2053 of lncs , springer - verlag , 2001 , pp .155172 , http://www.di.ens.fr/~mine/publi/article-mine-padoii.pdf .p. lammich , m. mller - olm , precise fixpoint - based analysis of programs with thread - creation and procedures , in : l. caires , v. t. vasconcelos ( eds . ) , concur , vol .4703 of lecture notes in computer science , springer , 2007 , pp .287302 .p. pratikakis , j. s. foster , m. hicks , locksmith : context - sensitive correlation analysis for race detection , in : pldi 06 : proceedings of the 2006 acm sigplan conference on programming language design and implementation , acm press , new york , ny , usa , 2006 , pp .l. fajstrup , e. goubault , m. rauen , detecting deadlocks in concurrent systems , in : concur 98 : proceedings of the 9th international conference on concurrency theory , springer - verlag , london , uk , 1998 , pp .332347 .l. o. andersen , http://repository.readscheme.org/ftp/papers/topps/d-203.ps.gz[program analysis and specialization for the c programming language ] , ph.d .thesis , diku , university of copenhagen ( may 1994 ) .http://repository.readscheme.org / ftp / papers / topps / d-203% .ps.gz[http://repository.readscheme.org/ ftp / papers / topps / d-203% .ps.gz ] a. venet , g. brat , precise and efficient static array bound checking for large embedded c programs , in : pldi 04 : proceedings of the acm sigplan 2004 conference on programming language design and implementation , acm press , new york , ny , usa , 2004 , pp .http://dx.doi.org/http://doi.acm.org/10.1145/996841.996869 [ ] .e. yahav , verifying safety properties of concurrent java programs using 3-valued logic , in : popl 01 : proceedings of the 28th acm sigplan - sigact symposium on principles of programming languages , acm press , new york , ny , usa , 2001 , pp . 2740 . http://dx.doi.org/http://doi.acm.org/10.1145/360204.360206 [ ] .
a great variety of static analyses that compute safety properties of single - thread programs have now been developed . this paper presents a systematic method to extend a class of such static analyses , so that they handle programs with multiple posix - style threads . starting from a pragmatic operational semantics , we build a denotational semantics that expresses reasoning _ à la _ assume - guarantee . the final algorithm is then derived by abstract interpretation . it analyses each thread in turn , propagating interferences between threads , in addition to other semantic information . the combinatorial explosion that ensues from the explicit consideration of all interleavings is thus avoided . the worst - case complexity is only increased by a factor compared to the single - thread case , where is the number of instructions in the program . we have implemented prototype tools , demonstrating the practicality of the approach .
wishart random matrices are named after john wishart who worked out their distribution in 1928 .wishart distribution generalizes the -distribution to the case of multiple variables . since their inception, wishart matrices have played a prominent role in the area of multivariate statistics . in recent yearsthere has been a renewed and growing interest in their study because of their applicability in analyzing a variety of unrelated complex problems .for instance , on the one hand wishart matrices have been implemented to analyze financial data . on the other handthey have been used to identify vulnerable regions in the human immunodeficiency virus ( hiv ) , which could lead to effective aids vaccines or drugs .further examples , where wishart matrices appear include telecommunication networks , quantum chromodynamics , quantum entanglement problem , mesoscopic systems , gene expression data analysis , etc .random matrix ensembles involving various combinations of wishart matrices are also relevant to several problems .the jacobi or manova ( multivariate analysis of variance ) ensemble is an example which is useful in the quantum conductance problem , and optical fiber communication studies .very recently several results involving the product of wishart matrices have appeared in the literature .these ensembles find applications in telecommunication of multi - layered scattering multiple - input and multiple - output channels .another important ensemble which plays a crucial role in the multivariate statistics comprises sum of wishart matrices .they arise in matrix quadratic forms , manova random effects model , and robustness studies involving mixtures of multivariate gaussian distributions .the distribution of sum of wishart matrices serves as a natural candidate distribution for modeling realized covariance and is of fundamental importance to the multivariate behrens - fisher problem .moreover , it has applications in quantitative finance , telecommunication , sensor network related algorithms , etc .the sum of independent wishart matrices taken from distributions with identical covariance matrices gives rise , again , to a wishart distribution with the same covariance matrix ; see eq . ahead .however , for the case of unequal covariance matrices , deriving the distribution of the sum of wishart matrices becomes extremely difficult and impractical . even in the case of two wishart matrices ,the distribution of sum involves a hypergeometric function with matrix arguments .this complicated and rotationally - noninvariant nature of the matrix distribution makes the evaluation of statistics of eigenvalues an intractable task . in the present workwe take the first steps towards solving this problem and consider the sum of two independent complex wishart matrices associated with unequal covariance matrices , such that one of the covariance matrices is proportional to the identity matrix , while the second one is arbitrary . 
to tackle this problemwe employ a generalization of the harish - chandra - itzykson - zuber unitary - group integral .we derive compact results for the joint probability density of eigenvalues , as well as the marginal density which involves easily evaluable determinantal structure .the analytical predictions are verified by numerical simulations , and we find excellent agreements .let us consider two independent complex matrices and of dimensions and taken , respectively , from the distributions here ` ' and ` ' represent the trace and the determinant , respectively , and ` ' denotes the hermitian - conjugate . , are the covariance matrices .we assume that .we have \,\mathcal{p}_a(a)=\int d[b]\ , \mathcal{p}_b(b)=1 ] , with and representing the real and imaginary parts , respectively .similar definition is to be understood for $ ] .since the domains of and remain invariant under unitary rotation , without loss of generality , we may take and as diagonal matrices .we consider and .the matrices and are then -variate complex - wishart - distributed , i.e. , and ; being the respective degrees of freedom .we are interested in the statistics of the ensemble of dimensional hermitian matrices the distribution of can be obtained as \int\ !d[b ] \delta(h - aa^\dag -bb^\dag ) \mathcal{p}_a(a ) \mathcal{p}_b(b).\ ] ] the delta function with matrix argument in the above equation represents the product of delta functions with scalar arguments , one for each independent real and imaginary component of .using the fourier representation for delta function we can write \int d[a]\int d[b ] e^{i { \text{\,tr\,}}(k(h - aa^\dag -bb^\dag ) ) } e^{-{\text{\,tr\,}}(a^\dag \sigma_a^{-1 } a ) } \,e^{-{\text{\,tr\,}}(b^\dag \sigma_b^{-1 } b)}.\ ] ] here is an dimensional matrix with the same symmetry properties as , i.e. , it is hermitian .the gaussian integrals over and can be performed trivially and result in e^{i { \text{\,tr\,}}(k h)}\det\!\ ! \,^{-n_a}(\sigma_a^{-1}+i k ) \det\!\!\,^{-n_b}(\sigma_b^{-1}+i k).\ ] ] as shown in the appendix , this can be brought to the form where , and is the following matrix integral involving the jacobi ensemble : \det\!\,^{n_a - n}({\mathds{1}_n}-t)\,\det\!\,^{n_b - n}t e^{{\text{\,tr\,}}((\sigma_a^{-1}-\sigma_b^{-1})ht)}.\ ] ] here is an dimensional hermitian matrix . if the covariance matrices happen to be equal , i.e. , , then gives just a constant and we obtain showing that is complex - wishart - distributed as .exact as well as asymptotic results for various eigenvalue statistics are known for this case . in the general case can be represented in terms of a confluent hypergeometric function of matrix argument , the second line in the above equation follows from the kummer s transformation .therefore , we obtain the distribution of as the normalization can be fixed by keeping track of all the constants from the beginning .we have where is the gamma function .. constitutes one of the key results of this paper . in the case of identical covariance matrices gives 1 , and thereby we recover eq . .we remark that the distribution of in the case of real matrices can also be obtained using the same procedure .we now specialize to the case when one of the covariance matrices is proportional to the identity matrix , say , while the second , , is arbitrary .equivalently , we may consider and an arbitrary in the second expression in eq . .for the former choice , the factor before the hypergeometric function in eq . becomes unitarily invariant . 
using the eigenvalue - decomposition , where is the diagonal matrix with the eigenvalues of , we obtain here is the vandermonde determinant and represents the haar measure over the unitary group .the above group integral can be performed using the result below , and leads to a hypergeometric function of two matrix arguments , this result is a generalization of the celebrated harish - chandra - itzykson - zuber unitary group integral .we have the following representation for in terms of a determinant involving the eigenvalues and of normal matrices and : }{\delta_n(\{x\})\delta_n(\{y\})},\\\ ] ] where inside the determinant is the usual confluent hypergeometric function with scalar arguments . using eqs . and in eq ., we obtain the joint probability density of the eigenvalues of as {j , k=1, ... ,n}.\ ] ] here is the normalization constant , and it is worth mentioning that the confluent hypergeometric function in eq . can be represented in terms of more elementary functions .noting that , we have where represents the binomial coefficient , and and are the beta function and the lower incomplete gamma function , respectively .this simplifies further for special cases or parameter values .for instance , gives which also includes the case .to evaluate the normalization constant in eq . , we expand the vandermonde determinant as well as the determinant involving the hypergeometric functions and perform the integral over the eigenvalues using the relation which holds whenever the integral is convergent .the expression obtained afterwards can be reformulated as a determinant .we obtain {j , k=1, ... ,n},\ ] ] such that when the s have multiplicity greater than 1 , i.e. , if some or all of the s are identical , then the determinants in eqs . and become zero . in such_ degenerate _ cases the appropriate result can be obtained by a limiting procedure .eq . is another important contribution of this work .1 shows the joint probability density corresponding to the case , with parameter values as indicated in the caption .the agreement between the analytical result and numerical - simulation result is excellent ..54 .41 we remark that the joint probability density given by eq .is of the form of a bi - orthogonal ensemble in the sense of borodin .such a structure , in view of the results in , implies existence of compact expression for the -point correlation function , which includes the level density . , and + .the histogram + is from the numerical simulation while the solid line is the analytical prediction ., scaledwidth=70.0% ] we now move on to calculate the marginal density of eigenvalues , which is given by and is related to the level density as . to this end , we expand the determinants in eq . and then integrate over the eigenvalues with the aid of eq .. the resulting expression can be recast in terms of the determinant of an -dimensional matrix .we have {k=1, ... ,n } \\ [ g_j(\lambda)]_{j=1, ... ,n } & [ h_{j , k}]_{j , k=1, ... ,n } \end{bmatrix};\ ] ] to enunciate the notation used above we consider , as an example , the case and write the determinant part explicitly : the normalization in eq .is given by {j , k=1, ... ,n}.\ ] ] eq .constitutes the main result of this paper .[ fig2 ] shows an example where we compare the analytical and simulation results .the parameter values are indicated in the caption .we find perfect agreement . 
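the simulation histograms compared with the analytical curves above can be reproduced by direct monte carlo sampling of the ensemble . the following is a minimal sketch ( assuming python with numpy and scipy , which are not used in the paper itself ) that samples h = a a^dagger + b b^dagger with sigma_b proportional to the identity and histograms the pooled eigenvalues as an estimate of the marginal density ; the dimensions , degrees of freedom and covariance entries below are illustrative and need not match the figure captions .

import numpy as np
from scipy.linalg import sqrtm

def sample_h(n, n_a, n_b, sigma_a, sigma_b, rng):
    # columns of a ( resp. b ) are independent complex gaussian vectors with
    # covariance sigma_a ( resp. sigma_b ) , matching the densities p_a , p_b
    za = (rng.standard_normal((n, n_a)) + 1j * rng.standard_normal((n, n_a))) / np.sqrt(2)
    zb = (rng.standard_normal((n, n_b)) + 1j * rng.standard_normal((n, n_b))) / np.sqrt(2)
    a = sqrtm(sigma_a) @ za
    b = sqrtm(sigma_b) @ zb
    return a @ a.conj().T + b @ b.conj().T

rng = np.random.default_rng(0)
n, n_a, n_b = 2, 4, 5                     # illustrative parameter values
sigma_a = np.diag([1.0, 2.0])             # arbitrary covariance matrix
sigma_b = 0.5 * np.eye(n)                 # proportional to the identity
eigs = np.concatenate([np.linalg.eigvalsh(sample_h(n, n_a, n_b, sigma_a, sigma_b, rng))
                       for _ in range(20000)])
density, edges = np.histogram(eigs, bins=80, density=True)
# the normalised histogram estimates the marginal eigenvalue density and can be
# compared with the determinantal expression derived above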
, and ,scaledwidth=70.0% ] again ,if some or all of the s are identical , then we have to take the limit properly to obtain the appropriate expression .for instance , if all the s are equal , viz . , then is still given by eq . , but with the following modification : here = .similar definition is to be understood for .we note that the following relations hold for the derivative of the confluent and gauss hypergeometric functions with respect to the last argument : where etc .represent the pochhammer symbol with definition . in fig .[ fig3 ] we consider a _ degenerate _ case where both the covariance matrices are proportional to the identity matrix .once again the analytic and the simulation results agree perfectly .we considered the problem of computing the eigenvalue statistics of sum of two independent complex wishart matrices taken from distributions with unequal covariance matrices .we found a complete solution to the problem when one of the covariance matrices is proportional to the identity matrix .we derived a compact result for the joint probability density of eigenvalues which can be used to evaluate the statistics of any observable dependent on the eigenvalues .we also derived an easily computable determinantal expression for the marginal density of eigenvalues .these expressions can be readily implemented in mathematica . finally , we performed numerical simulations to test the analytical results and found perfect agreement .it remains to see if some compact form can be obtained for the case when both the covariance matrices are arbitrary .moreover , it will be of interest to explore if the problem involving the sum of more than two wishart matrices is analytically surmountable , and if there is some underlying deeper structure .we outline here the steps leading to eq . , starting from eq . .we introduce another delta - function in eq ., involving a new hermitian matrix and afterwards separate the ` ' and ` ' parts : \int d[\widetilde{k } ] \delta(\widetilde{k}-k ) e^{i{\text{\,tr\,}}(k h)}\det\!\!\,^{-n_a}(\sigma_a^{-1}+i k)\det\!\!\,^{-n_b}(\sigma_b^{-1}+i k)\\ & & = \int d[k]\int d[\widetilde{k } ] \delta(\widetilde{k}-k)e^{i{\text{\,tr\,}}(k h/2)}\det\!\!\,^{-n_a}(\sigma_a^{-1}+i k ) e^{i { \text{\,tr\,}}(\widetilde{k } h/2)}\det\!\!\,^{-n_b}(\sigma_b^{-1}+i \widetilde{k})\\ & & = \int d[s]\int d[k]\int d[\widetilde{k } ] e^{i{\text{\,tr\,}}(s(\widetilde{k}-k ) ) } e^{i { \text{\,tr\,}}(k h/2)}\det\!\!\,^{-n_a}(\sigma_a^{-1}+i k)e^{i{\text{\,tr\,}}(\widetilde{k } h/2)}\det\!\!\,^{-n_b}(\sigma_b^{-1}+i \widetilde{k}).\end{aligned}\ ] ] in the last step above we introduced the fourier representation for with the aid of a hermitian matrix .we now consider the transformations and .the resulting jacobians can be absorbed in the overall constant and therefore we obtain \!\int \!d[k ] e^{i { \text{\,tr\,}}(k ( h/2-s)\sigma_a^{-1})}\det\,\!\!^{-n_a}({\mathds{1}_n}+i k)\\ \times\int \!d[\widetilde{k}]e^{i { \text{\,tr\,}}(\widetilde{k } ( h/2+s)\sigma_b^{-1})}\det\!\!\,^{-n_b}({\mathds{1}_n}+i \widetilde{k}).\end{aligned}\ ] ] the and integrals can be performed using the ingham - siegel type integral , yielding \det\!\!\,^{n_a - n}(h/2-s)\,\,e^{-{\text{\,tr\,}}((h/2-s)\sigma_a^{-1})}\det\!\!\,^{n_b - n}(h/2+s)\,\,e^{-{\text{\,tr\,}}((h/2+s)\sigma_b^{-1})}\\ \times\theta((h/2-s)\sigma_a^{-1})~\theta((h/2+s)\sigma_b^{-1}).\end{aligned}\ ] ] here represents the matrix theta function , and requires the matrix to be positive definite ( ) for a non - vanishing result . 
employing the transformation , and observing that , we obtain \det\,^{n_a - n}({\mathds{1}_n}-s)\det\,^{n_b - n}({\mathds{1}_n}+s)\\ \times e^{{\text{\,tr\,}}((\sigma_a^{-1}-\sigma_b^{-1})hs/2)}\theta({\mathds{1}_n}-s)\theta({\mathds{1}_n}+s).\end{aligned}\ ] ] the matrix theta functions in the above expression restricts the domain of in the integration to . finally , introducing the hermitian matrix we have \det\!\!\,^{n_a - n}({\mathds{1}_n}-t)\det\!\!\,^{n_b - n}t \,e^{{\text{\,tr\,}}((\sigma_a^{-1}-\sigma_b^{-1})ht)},\end{aligned}\ ] ] and hence eq . .j. wishart , http://dx.doi.org/10.1093/biomet/20a.1-2.32[biometrika * 20a * , 32 ( 1928 ) ] .a. t. james , http://dx.doi.org/10.1214/aoms/1177703550[ann . math . statist . * 35 * , 475 ( 1964 ) ] .r. j. muirhead , _ aspects of multivariate statistical theory _ vol . 197 ( john wiley & sons , 2009 ). t. w. anderson , _ an introduction to multivariate statistical analysis _ ( john wiley & sons , 2003 ) , 3rd ed .h. h. andersen , m. hjbjerre , d. srensen and p. s. eriksen , _ linear and graphical models for the multivariate complex normal distribution _ , lecture notes in statistics , vol. 101 ( springer - verlag , new york 1995 ) .a. k. gupta and d. k. nagar , _ matrix variate distributions _104 ( crc press , 1999 ) .l. laloux , p. cizeau , j .-bouchaud , and m. potters , http://dx.doi.org/10.1103/physrevlett.83.1467[phys .lett . * 83 * , 1467 ( 1999 ) ] .v. plerou , p. gopikrishnan , b. rosenow , l. a. n. amaral , t. guhr , and h. e. stanley , http://dx.doi.org/10.1103/physreve.65.066126[phys .e * 65 * , 066126 ( 2002 ) ] .g. akemann , j. fischmann , and p. vivo , http://dx.doi.org/10.1016/j.physa.2010.02.026[physica a * 389 * , 2566 ( 2010 ) ] .t. a. schmitt , d. chetalova , r. schfer , and t. guhr , http://dx.doi.org/10.1209/0295-5075/105/38004[europhys .lett . * 105 * , 38004 ( 2014 ) ] .v. dahirel _et al . _ ,108 * , 11530 ( 2011 ) ] .i. e. telatar , http://dx.doi.org/10.1002/ett.4460100604[eur .* 10 * , 585 ( 1999 ) ] .a. m. tulino and s. verdu , _ random matrix theory and wireless communications _ , foundations and trends com . and( now publishers inc , boston , delft , 2004 ) .s. h. simon , a. l. moustakas , and l. marinelli , http://dx.doi.org/10.1109/tit.2006.885519[ieee trans .theory * 52 * , 5336 ( 2006 ) ] .a. zanella , m. chiani , and m. z. win , http://dx.doi.org/10.1109/tcomm.2009.04.070143[ieee trans . commun . * 57 * 1050 ( 2009 ) ] .s. kumar and a. pandey , http://dx.doi.org/10.1109/tit.2010.2044060[ieee trans .theory * 56 * , 2360 ( 2010 ) ] .s. kumar and a. pandey , http://dx.doi.org/10.1016/j.aop.2011.04.013[ann .* 326 * , 1877 ( 2011 ) ] .shuryak and j. j. m. verbaarschot , http://dx.doi.org/10.1016/0375-9474(93)90098-i[nucl .a * 560 * , 306 ( 1993 ) ] .j. verbaarschot , http://dx.doi.org/10.1103/physrevlett.72.2531[phys .lett . * 72 * , 2531 ( 1994 ) ] .t. guhr and t. wettig , http://dx.doi.org/10.1016/s0550-3213(97)00556-7[nucl .b * 506 * , 589 ( 1997 ) ] .j. j. m. verbaarschot and t. wettig , http://10.1146/annurev.nucl.50.1.343[ann .sci . * 50 * , 343 ( 2000 ) ] .g. akemann , http://dx.doi.org/10.5506/aphyspolb.42.901[acta phys . pol .b * 42 * , 0901 ( 2011 ) ] .k. zyczkowski and h .- j .sommers , http://dx.doi.org/10.1088/0305-4470/34/35/335[j .a * 34 * , 7111 ( 2001 ) ] . j. n. bandyopadhyay and a. lakshminarayan , http://dx.doi.org/10.1103/physrevlett.89.060402[phys .lett . * 89 * , 060402 ( 2002 ) ] . c. nadal , s. n. majumdar , and m. vergassola , http://dx.doi.org/10.1103/physrevlett.104.110501[phys .lett . 
* 104 * , 110501 ( 2010 ) ] .s. kumar and a. pandey , http://dx.doi.org/10.1088/1751-8113/44/44/445301[j .phys a. * 44 * , 445301 ( 2011 ) ] .vinayak and m. nidari , http://dx.doi.org/10.1088/1751-8113/45/12/125204[j .a * 45 * , 125204 ( 2012 ) ] . p.j. forrester and t. d. hughes , http://dx.doi.org/10.1063/1.530639[j . math .phys . * 35 * , 6736 ( 1994 ) ] .k. slevin and t. nagao , http://dx.doi.org/10.1103/physrevb.50.2380[phys .b * 50 * , 2380 ( 1994 ) ] .n. s. holter , m. mitra , a. maritan , m. cieplak , j. r. banavar , and n. v. fedoroff , http://dx.doi.org/10.1073/pnas.150242097[proc .* 97 * , 8409 ( 2000 ) ] .o. alter , p. o. brown , and d. botstein , http://dx.doi.org/10.1073/pnas.97.18.10101[proc .97 * 10101 ( 2000 ) ] .i. dumitriu and a. edelman , http://dx.doi.org/10.1063/1.1507823[j .43 , 5830 ( 2002 ) . ]p. j. forrester , http://dx.doi.org/10.1088/0305-4470/39/22/004[j .a * 39 * , 6861 ( 2006 ) ] .s. kumar and a. pandey , http://dx.doi.org/10.1088/1751-8113/43/8/085001[j .a * 43 * , 085001 ( 2010 ) ] .r. dar , m. feder , and m. shtaif , http://dx.doi.org/10.1109/tit.2012.2233860[ieee trans .theory * 59 * , 2426 ( 2013 ) .] g. akemann , j. r. ipsen , and m. kieburg , http://dx.doi.org/10.1103/physreve.88.052118[phys .e * 88 * , 052118 ( 2013 ) ] .g. akemann , m. kieburg , and l. wei , http://dx.doi.org/10.1088/1751-8113/46/27/275205 [ j. phys .a * 46 * , 275205 ( 2013 ) ] . c. g. khatri , http://dx.doi.org/10.1214/aoms/1177699530[ann .math . statist . * 37 * , 468 ( 1966 ) ] .w. y. tan and r. p. gupta , http://dx.doi.org/10.1080/03610928308828625[commun .statist . - theory meth .* 12 * , 2589 ( 1983 ) ] .d. g. nel and c. a. van der merwe , http://dx.doi.org/10.1080/03610928608829342[commun .- theory meth .* 15 * , 3719 ( 1986 ) ] .k. sheppard , ( unpublished ) , http://www.kevinsheppard.com/images/e/e2/psdmem_sheppard.pdf b. nosrat - makouei , j. g. andrews , and r. w. heath , http://dx.doi.org/10.1109/tsp.2011.2124458[ieee trans .sig . process .* 59 * , 2783 ( 2011 ) ] .k. conradsen , a. a. nielsen , j. schou , and h. skriver , http://dx.doi.org/10.1109/tgrs.2002.808066[ieee trans .remote sensing * 41 * , 4 ( 2003 ) ] .n. ramakrishnan , e. ertin , and r. l. moses , http://dx.doi.org/10.1109/jstsp.2011.2119291[ieee j. sel .. proces . * 5 * , 665 ( 2011 ) ] .a. y. orlov , http://dx.doi.org/10.1142/s0217751x04020476[int .a * 19 * , 276 ( 2004 ) ] .vinayak and a. pandey , http://dx.doi.org/10.1103/physreve.81.036202[phys .e * 81 * , 036202 ( 2010 ) ] . c. recher , m. kieburg , and t. guhr , http://dx.doi.org/10.1103/physrevlett.105.244101[phys .* 105 * , 244101 ( 2010 ) ] .p. dharmawansa and m. r. mckay , http://dx.doi.org/10.1016/j.jmva.2011.01.004[j .102 * , 847 ( 2011 ) ] . c. recher , m. kieburg , t. guhr , and m. r. zirnbauer , http://dx.doi.org/10.1007/s10955-012-0567-x[j .. phys . * 148 * , 981 ( 2012 ) ] .t. wirtz and t. guhr , http://dx.doi.org/10.1103/physrevlett.111.094101[phys .lett . * 111 * , 094101 ( 2013 ) ] .p. j. forrester , http://dx.doi.org/10.1142/s2010326313500111[random matrices : theory appl .* 02 * , 1350011 ( 2013 ) ] . t. wirtz and t. guhr , http://dx.doi.org/10.1088/1751-8113/47/7/075004[j .a * 47 * , 075004 ( 2014 ) ] .i. g. macdonald , hypergeometric functions i ( handwritten notes ) , 1987 - 1988 , http://arxiv.org/abs/1309.4568[arxiv:1309.4568 ] a. edelman and p. koev , http://dx.doi.org/10.1142/s2010326314500099[random matrices : theory appl .* 03 * , 1450009 ( 2014 ) ]. s. kumar ( unpublished ) .m. l. 
mehta , _ random matrices _ ( academic press , new york , 2004 ) , 3rd ed .a. borodin , http://dx.doi.org/10.1016/s0550-3213(98)00642-7[nucl .b * 536 * , 704 ( 1998 ) ] .wolfram research inc ., mathematica _ version 9.0 _ , champaign , illinois ( 2013 ) .y. v. fyodorov , http://dx.doi.org/10.1016/s0550-3213(01)00508-9[nucl .b * 621 * , 643 ( 2002 ) ] .d. serre , _ matrices : theory and applications ( springer , 2010 ) , 2nd ed .
the sum of independent wishart matrices , taken from distributions with unequal covariance matrices , plays a crucial role in multivariate statistics , and has applications in the fields of quantitative finance and telecommunication . however , analytical results concerning the corresponding eigenvalue statistics have remained unavailable , even for the sum of two wishart matrices . this can be attributed to the complicated and rotationally - noninvariant nature of the matrix distribution , which makes extracting information about the eigenvalues a nontrivial task . using a generalization of the harish - chandra - itzykson - zuber integral , we find an exact solution to this problem for the case when one of the covariance matrices is proportional to the identity matrix , while the other is arbitrary . we derive exact and compact expressions for the joint probability density and the marginal density of eigenvalues . the analytical results are compared with numerical simulations , and we find perfect agreement .
because of the abundant availability of mouse and human genome data ( mouse genome sequencing consortium 2002 , international human genome sequencing consortium 2001 ) , it has come to light that mutation rates vary widely across different regions of the human genome ( hardison et al . 2003 , mouse genome sequencing consortium 2002 , matassi et al . 1999 ) , in agreement with a number of smaller - scale studies ( wolfe et al .1989 , perry and ashworth 1999 , casane et al .regions of unusually high or low substitution rates have been observed from 4-fold sites and ancestral repeat sequences , two of the best candidates for measuring neutral rates of mutation in mammals ( hardison et al .2003 , mouse genome sequencing consortium 2002 , sharp et al .the reasons for such regional variability are unclear , since structural characterizations of the mutation rate are nascent .whatever the reason for these hot and cold regions , their existence suggests a question that has intriguing consequences for molecular evolution : does the organism take advantage of these hot and cold spots ? one way to take advantage of a hot region would be to place genes there for which the hotness is useful an intuitive example would be receptor proteins , which must respond to a constantly changing ligand set .similarly , it could be beneficial to place delicate genes in a cold region , to reduce the possibility of deleterious mutations .these potential advantages offer the possibility that regional mutation rates affect the spatial organization of genes .the idea of such organization in mouse and human is bolstered by recent findings of gene organization in yeast .for example , pal and hurst showed that yeast genes are organized to take advantage of local recombination rates ( pal and hurst 2003 ) , which is particularly relevant since mutation rate and recombination rate are known to be correlated ( lercher and hurst 2002 ) .if the local mutation rate equivalent to the synonymous ( amino - acid preserving ) substitution rate if synonymous substitutions are neutral affects gene organization , this would constitute a type of selection complementary to traditional selection on point mutations ( graur and li 2000 ) .we studied whether local mutation rates affect gene locations by measuring the mutation rates of genes and their organization in the human genome .first , we analyzed the substitution rates of the genes in each of the families defined by the gene ontology ( gene ontology consortium 2000 ) .if the organism is taking advantage of varying , gene families should be biased toward regions of appropriate rate .in fact we observe that several functional classes of genes preferentially occur in hot or cold regions .some of the notable hot categories we observe are olfactory genes , cell adhesion genes , and immune response genes , while the cold categories are biased toward regulatory proteins such as those involved in transcription regulation , dna / rna binding , and protein modification .also , to better characterize the hot and cold regions , we measured the length scale over which substitution rates vary . 
while rough bounds on the size of hot and cold regions are known ( hardison et al .2003 , matassi et al .1999 ) , this paper presents the first quantitative calculation of their length scale .because mutation rates are regional , mutation rates in genes categories could be influenced by events altering the organization of genes in the genome , such as gene relocation or gene duplication .we therefore analyzed mechanisms by which functional categories of genes may have become concentrated in hot or cold regions .a clustering analysis reveals that the hotness of some categories is enhanced by local gene duplications in hot regions .however , there are strong functional similarities among the hot categories both clustered and unclustered , as well as among the cold categories .these functional similarities imply that the instances of duplicated categories are not random .i.e. selection may have affected which genes have duplicated and persisted .recently , substitution rates between _ mus musculus _ and _ homo sapiens _ have been measured by several groups on a genome - wide scale ( mouse genome sequencing consortium 2002 , hardison et al .2003 , kumar and subramanian 2002 ) .these substitution rates vary significantly across the genome ( mouse genome sequencing consortium 2002 , hardison et al .2003 ) , suggesting that neutral mutation rates may have regional biases as well .a popular proxy for neutral mutation rates is the substitution rate at 4-fold sites ( a recent example is ( kumar and subramanian 2002 ) ) , base positions in coding dna which do not affect protein sequence , and which should hence be under less selective pressure than other sites .the 4-fold sites also offer the advantage of being easily alignable . for these reasons , we estimated the neutral mutation rate from substitution rates at 4-fold sites ( which we use interchangeably with the term in this paper ) .this identification is not without complexities , however , since there are processes which can in principle selectively affect the 4-fold sites .for example , some have argued that exogenous factors such as isochore structure influence the silent sites ( bernardi 2000 ) , and codon usage adaptation has been shown to affect silent sites in bacteria and yeast ( sharp and li 1987 , percudani and ottonello 1999 ) .so far , such selective effects have been difficult to detect in mammals ( iida and akashi 2000 , kanaya et al .2001 , duret and mouchiroud 2000 , smith and hurst 1999a ) .recently , hardison et al showed that several functionally unrelated measures of mutation rate , including snp density , substitutions in ancestral repeats , and substitutions in 4fold sites , are correlated in genome - wide mouse - human comparisons ( hardison et al .2003 ) suggesting that these measures have common neutral aspects .we constructed our own dataset of the 4-fold substitution rates for 14790 mouse / human orthologous genes , using data from the ensembl consortium . in order to properly account for stochastic finite - size effects, we mapped the observed substitution rates to a normalized value , based on the assumption that all 4-fold sites mutate at the same rate ( see methods ) . 
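the normalization referred to above is detailed in the paper's methods ; one plausible implementation , assuming a simple binomial null model in which every 4-fold site of a gene substitutes independently with the genome - wide probability , is sketched below in python . the site counts and rates used in the example call are illustrative only , and the paper's own construction may differ in detail .

import math

def normalized_rate(k_subst, n_sites, p0):
    # map an observed substitution count at n_sites 4-fold sites to an
    # approximately standard normal score under the binomial null model
    mean = n_sites * p0
    sd = math.sqrt(n_sites * p0 * (1.0 - p0))
    return (k_subst - mean) / sd

# example : a gene with 500 alignable 4-fold sites and 190 observed
# substitutions , against a genome - wide rate of 0.337 per site
print(normalized_rate(190, 500, 0.337))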
under this assumption , it was expected that the normalized substitution rates would follow the normal distribution ( a gaussian with ) .contrary to these expectations , the distribution of ortholog substitution rates was found to be highly biased toward high or low rates , indicating that 4-fold mutation rates vary substantially by location , and on a scale larger than the typical size of a gene .[ fig : mutdistribution ] shows the distribution of substitution rates for all mouse / human orthologs . the observed distribution has excesses of genes at both high and low substitution rates .these results are in agreement with the findings of matassi et al ( matassi et al . 1999 ) , who reported significant mutation rate correlations between neighboring genes .this is not a compositional effect the distribution remained the same even when corrections for the gene s human base composition were made ( see methods ) .we further verified that substitution rates of neighboring genes were correlated using an analysis qualitatively similar to matassi et al though with approximately 20 times more orthologs finding that gene substitution rates are correlated with their neighbors with a p - value of ( see methods ) .these results imply that substitution rates have regional biases acting both within a gene and over longer length scales .we next considered whether there is a relationship between gene locations and their functions .i.e. whether functional categories of genes have biases for being in regions of particular mutation rate . to test whether such biases exist , we performed an analysis of the gene ontology ( go ) assignments for each ortholog pair ( gene ontology consortium 2000 ) , using data from the ensembl human ensmart database to assign genes to go categories . for each go category, we calculated a -score to measure the overall substitution rate , based on the substitution rates of the genes in the category ( see methods ) .the 21 go categories having statistically significant positive values of are shown in table [ table : gohighsub ] . in terms of 4-fold substitution rates , the hot category rate averages were found to range from 0.346 ( integral to membrane ) to 0.468 ( internalization receptor activity ) , while the genome - wide average was 0.337 ( with gene - wise standard deviation 0.08 ) . for a category with several genes ,the effective standard deviation is much smaller , equal to , where is the number of genes in the category ; so these rate biases are extremely significant .hot gene categories were focused mainly in receptor - type functions , along with a few other categories such as proteolysis and microtubule motor activity .some preferences were partially because categories have genes in common , e.g. 8 genes are shared between the categories dynein atpase activity , dynein complex and microtubule - based movement category .however , there were several categories of similar function which were independent .e.g. membrane and olfactory receptor activity shared no genes , and cell adhesion and immune response shared only 5% of their genes .overall , there was a clear bias for the larger hot categories to contain receptor - type proteins : e.g. receptor activity , olfactory receptor activity , g - protein coupled receptor protein signaling pathway , membrane , and immune response . 
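The methods are not reproduced in this copy, so the sketch below shows one plausible way to compute the per-gene normalized rate and the category score: a simple binomial null in which every 4-fold site substitutes independently with the genome-wide probability, and a category score equal to the deviation of the category mean from the genome-wide mean in units of the effective standard deviation (the gene-wise standard deviation divided by the square root of the number of genes in the category). The binomial model, the two-sided tail test and all variable names are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

def gene_z(k_subs, n_sites, p_genome):
    """Normalized substitution rate of one gene under a binomial null:
    every 4-fold site substitutes independently with probability p_genome."""
    rate = k_subs / n_sites
    return (rate - p_genome) / np.sqrt(p_genome * (1.0 - p_genome) / n_sites)

def category_score(rates, genome_mean, genome_sd):
    """Score of a gene category: deviation of the category mean rate from the
    genome-wide mean, in units of the effective standard deviation
    genome_sd / sqrt(N) for a category of N genes."""
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    z = (rates.mean() - genome_mean) / (genome_sd / np.sqrt(n))
    p_value = 2.0 * norm.sf(abs(z))        # two-sided tail probability
    return z, p_value

# toy example: 50 genes drawn slightly hotter than the genome-wide average of 0.337
rng = np.random.default_rng(0)
hot_rates = rng.normal(0.36, 0.08, size=50)
print(category_score(hot_rates, genome_mean=0.337, genome_sd=0.08))
```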
for the set of all genes where the string `` receptor '' is part of the go description , the average 4-fold substitution rate was found to be 0.347 .the probability for the set of 1488 receptor genes to have a mutation rate this high is . the 36 statistically significant go categories with negative scores , are shown in table [ table : golowsub ] .the 4-fold rate averages for the cold categories ranged from 0.220 ( mrna binding activity ) to 0.326 ( protein serine / threonine kinase activity ) .the coldest gene categories included nuclear proteins , transcription regulation , dna and rna binding , oncogenesis , phosphatases , and kinases , all of which are important to regulatory processes .many of these genes are also housekeeping genes ( hsiao et al .for the set of all genes where the string `` regulat '' is part of the go description , the average 4-fold substitution rate was found to be 0.325 .the probability for the set of 1704 regulation genes to have a mutation rate this high is .we repeated our -score classifications using several other measures of mutation rate and in each case inferred similar hot and cold categories .for example , under the normalized rate model that accounts for human base composition , the same set of 23 hot categories were found . of the 37 cold categories ,33 remained classified as cold .the 4 lost were : regulation of transcription from pol ii promoter , development , neurogenesis , and translation regulator activity .there were 6 new categories , and these were also largely regulatory : nucleic acid binding activity , translation initiation factor activity , ubiquitin c - terminal hydrolase activity , collagen , rna processing , and negative regulation of transcription .we also calculated several maximum likelihood ( ml ) measures of using mutation models in the paml package ( yang 1997 ) , including the nei and gojobori ( nei and gojobori 1986 ) codon - based measure and the tamura - nei ( tn93 ) and rev ( tavere 1986 ) models .we again found qualitatively similar sets of hot and cold categories receptor genes at high substitution rates and regulatory genes at low substitution rates though there were changes in the numbers of significant categories .for example , for the tn93 model , we observed 10 hot categories : induction of apoptosis by extracellular signals , g - protein coupled receptor protein signaling pathway , olfactory receptor activity , receptor activity , apoptosis , enzyme activity , chymotrypsin activity , trypsin activity , integral to membrane , and dynein atpase activity ; and 8 cold categories : calcium - dependent protein serine / threonine phosphatase activity , ribonucleoprotein complex , protein serine / threonine kinase activity , rna binding activity , protein amino acid dephosphorylation , intracellular protein transport , protein transporter activity , and nucleus .the categories inferred from our original -score analysis are probably more accurate than those from ml methods , because ml methods tend to produce strong outliers at high substitution rate , skewing calculations of the variance in the -score analysis . 
given the existence of hot and cold gene categories , the question then becomes : why do these biases exist ?one potentially non - selective factor that could affect category rate biases is local gene duplications .new genes generally arise by duplication , in which a new copy of a gene is generated nearby to the pre - existing gene by a recombinatorial event such as unequal crossing - over , followed by evolution to a novel , but often related function ( graur and li 2000 ) .such local duplications can cause many genes with similar function to be clustered together . because there are regional biases in mutation rate ( discussed in the section on block structure of the correlation length ), these functionally - related genes will tend to have similar mutation rates .go categories containing these genes will then be biased toward the mutation rate of the region surrounding the genes .we tested the effect of gene duplications on category rates through a clustering analysis ( see methods ) .if gene duplications are not important to category rates , genes in a hot ( cold ) gene category would be expected to be distributed randomly throughout the many hot ( cold ) regions around the genome , i.e. clustering of genes would be weak . however, if gene duplications are relevant , we would expect hot ( cold ) genes of the same category to be tightly clustered since many of these genes would have arisen by local duplications .we therefore studied the location distribution of each of the gene categories and analyzed the significance of its clustering , using the short - range correlation length base pairs ( see the section on block structure ) as a defining length scale .this analysis was similar to that of williams and hurst , who studied clustering of tissue - specific genes ( williams and hurst 2002 ) , though we analyzed a larger number of more narrowly defined gene families .we found that some of the hot gene categories were indeed clustered , but that none of the cold gene categories were .the results of the clustering for the hot and cold categories are displayed in tables [ table : gohighsub ] and [ table : golowsub ] , with the clustering p - values shown via their values .of the 21 statistically significant hot categories , 10 categories had statistically significant clustering ( ) .for example , the olfactory receptor activity category go:0004984 has 223 genes , with a randomly expected number of clustered genes equal to 30.6 .the actual number of clustered genes was found to be 190 , which has a p - value of less than . in the set of 37 cold genego categories , none had statistically significant clustering .the clustering significance is plotted versus the substitution score for all the go categories with at least 5 members in fig.[fig : ratecluster5 ] .there were many categories of hot genes with significant clustering ( ) , but virtually no cold ones .as an example of clustering in the hot gene categories , we considered the olfactory receptors ( go:0004984 ) . 
it is well - established that olfactory receptors occur in clusters throughout the human genome ( rouquier et al .1998 ) , and we likewise observed the olfactory receptors to be highly clustered in three regions near the head , middle , and tail of chromosome 11 ( fig .[ fig : chr11clusters ] ) .the central is displayed in fig .[ fig : chr11hot ] .this clustering provided evidence that local gene duplications have influenced the high category rate of the olfactory genes .we next attempted to determine if the high olfactory rates are due to a regional bias .the substitution rates of all genes are plotted in fig .[ fig : chr11hot ] , with the olfactory genes in red .as expected , the olfactory genes exhibited an obvious bias for higher substitution rates than other genes .we next calculated the mutation rate of the region as determined from an independent measure , the substitution rates between ancestral repeat sequences ( green curve ) , using data published by hardison et al ( hardison et al .2003 ) ( see methods ) .the repeat sequence mutation rate was notably higher in the regions where the olfactory genes occur , showing that the hotness of the olfactory genes is a regional property , and not specific to the genes .similar clustering and regional hotness were observed for other hot gene categories .we plot the substitution rates of a cluster of homophilic cell adhesion genes ( go:0007156 ) on chromosome 5 in fig . [fig : chr5hot ] , along with the rates of nearby genes and the ancestral repeat sequence substitution rates .the same features observed for the olfactory genes were also present for the cell adhesion genes : clustering , high substitution rates , and an elevated ancestral repeat substitution rate .the repeat substitution rate exhibited a plateau - like behavior over the region defined by the homophilic cell adhesion genes .these factors support the interpretation that significant numbers of hot genes have arisen by duplications in inherently hot regions of the genome .several explanations have been proposed that could account for the regional biases in mutation rate ( mouse genome sequencing consortium 2002 ) , including recombination - associated mutagenesis ( lercher and hurst 2002 , perry and ashworth 1999 ) , strand asymmetry in mutation rates ( francino 1997 ) and inhomogeneous timing of dna replication ( wolfe et al .1989 , gu and li 1994 ) .the structure of regional biases could be considered from the perspective of amino - acid changing substitutions as well , since linked proteins have been known to have similar substitution rates ( williams and hurst 2000 , williams and hurst 2002 ) .however , the silent sites may be easier to comprehend , since protein sequences are more likely to be complicated by non - neutral pressures . to shed light on the structural properties of the hot and cold mutational regions , we measured the length scale over which substitution rates are correlated .previously , correlations have been observed in blocks of particular physical ( megabases ) ( hardison et al .2003 ) or genetic ( centimorgans ) ( matassi et al . 1999 , lercher et al . 2001 ) size . 
while these studies have focused on whether correlations exist at certain length scales , it is informative to measure the decay of correlations with distance .we therefore measured the length scale of substitution rate correlation , using an analysis of the correlation function ( huang 1987 ) where is the substitution rate of a gene basepairs downstream of a gene with substitution rate , and indicates an average over the available data ( see methods ) .we expect that at small , the correlation function will be positive and then decrease with as rates become decoupled .the length scale over which this decay occurs serves as a measure of the typical size of hot or cold regions .the rate correlation function is plotted in fig .[ fig : rate_correlation ] versus both the human and mouse values for .we observed two notable behaviors : 1 ) a strong correlation which decays over a region of approximately 1 megabase , and 2 ) a longer range correlation which plateaus over a region of approximately 10 megabases . at larger distances , correlations are weaker . for example , the human curve first dips below the threshold at approximately 11 mb and the mouse curve first crosses it at approximately 9 mb .this suggests that there are multiple phenomena which control the mutation rate of regions , both long ( 10 mb ) and short ( 1 mb ) length scale .we also measured the characteristic short - range correlation length using an exponential fit .the correlation length was determined by fitting the data to the functional form where is the correlation at long distances and is the correlation at zero distance . because of the observed plateauing behavior of the data , we performed our curve fit over the region ] and then averaged over each of these bins to determine the correlation function .the data were plentiful enough for the averaged values shown in fig .[ fig : rate_correlation ] to be statistically significant .it was difficult to extend to larger values of since the amount of data decreases with , a fact manifested in the increasing fluctuations at larger in fig .[ fig : rate_correlation ] . for example , the value of the average correlation at megabases in the human data of fig .[ fig : rate_correlation ] was based on only 79 measurements , whereas at it was based on 22860 measurements . for genes with alternative splicings ,only one of the genes was used , in order to avoid spurious effects caused by reuse of dna .orthologous block boundaries were defined by genes at which the chromosome changes in either species .monotonicity and consistent strand orientation were ignored in order to obtain blocks with large values of .most of the data comes from blocks at least several megabases long .approximately 5% is in blocks of size less than base pairs , 55% is in blocks of size between and base pairs , and the remaining 40% is in larger blocks . the long - range correlation shown in fig .[ fig : rate_correlation ] was statistically significant .theoretically , fluctuations in should be of order , where is the number of data samples in a bin . 
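A minimal sketch of the correlation-length measurement is given below: pair products of rate deviations are binned by separation within one syntenic block, and the short-range part of the binned curve is fitted with the exponential form c_inf + (c0 - c_inf) exp(-d / x0). The synthetic block with a built-in 1 Mb decay length is only there to exercise the fit; the binning scheme and the fit range are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def rate_correlation(positions, rates, bin_edges):
    """Binned correlation of substitution-rate deviations versus pair
    separation along the chromosome (one syntenic block, positions sorted)."""
    dev = rates - rates.mean()
    norm = dev.var()
    n = len(rates)
    seps, prods = [], []
    for i in range(n):
        for j in range(i + 1, n):
            seps.append(positions[j] - positions[i])
            prods.append(dev[i] * dev[j] / norm)
    seps, prods = np.asarray(seps), np.asarray(prods)
    centers, corr = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (seps >= lo) & (seps < hi)
        if sel.any():
            centers.append(seps[sel].mean())
            corr.append(prods[sel].mean())
    return np.asarray(centers), np.asarray(corr)

def exp_decay(d, c_inf, c0, x0):
    """c_inf + (c0 - c_inf) * exp(-d / x0): long-range level, zero-distance
    level and the characteristic correlation length x0."""
    return c_inf + (c0 - c_inf) * np.exp(-d / x0)

# synthetic block with a built-in ~1 Mb correlation length, to exercise the fit
rng = np.random.default_rng(0)
step, n_genes = 50_000, 600
pos = np.arange(n_genes) * step
rho = np.exp(-step / 1.0e6)                 # AR(1) with a 1 Mb decay length
r = np.empty(n_genes); r[0] = rng.normal()
for i in range(1, n_genes):
    r[i] = rho * r[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
edges = np.linspace(0, 5.0e6, 26)
centers, corr = rate_correlation(pos, r, edges)
short = centers < 3.0e6
popt, _ = curve_fit(exp_decay, centers[short], corr[short], p0=[0.0, 1.0, 1.0e6])
print("fitted correlation length [bp]:", popt[2])
```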
at a distance of 10 megabases , there were samples , corresponding to an uncertainty of .this uncertainty was an order of magnitude smaller than the observed value of .99 alberts b , bray d , lewis j , raff , m , roberts k , et al .( 1994 ) molecular biology of the cell .new york : garland publishing .altschul sf , gish w , miller w , meyers ew , lipman dj ( 1990 ) basic local alignment search tool .j mol biol 215:403 .bernardi g ( 2000 ) isochores and the evolutionary genomics of vertebrates .gene 241:3 .casane d , boissinot s , chang bh - j , shimmin lc , li wh ( 1997 ) mutation pattern among regions of the primate genome .j mol evol 45:216 .castresana j ( 2002 ) genes on human chromosome 19 show extreme divergence from the mouse orthologs and a high gc content .nucl acids res 30:1751 .durbin r , eddy s , krogh a , mitchison g ( 1998 ) biological sequence analysis .cambridge : cambridge university press .duret l , mouchiroud d , gouy m ( 1994 ) hovergen , a database of homologous vertebrate genes .nucleic acids res 22:2360 .duret l , mouchiroud d ( 2000 ) determinants of substitution rates in mammalian genes : expression pattern affects selection intensity but not mutation rate .mol bio evol 17:68 .francino mp , ochman h ( 1997 ) strand asymmetries in dna evolution .trends in genetics 13:240 . gene ontology consortium ( 2000 ) gene ontology : tool for the unification of biology .nature genet . 25 : 25 - 29 .goldman n , yang z ( 1994 ) a codon - based model of nucleotide substitution for protein - coding dna sequences .mol bio evol 11:725 .goldsby r , kindt t , osborne b , kuby j ( 2000 ) immunology .new york : w. h. freeman and co. graur d , li wh ( 2000 ) fundamentals of molecular evolution , 2nd edition .sunderland : sinauer associates .gu x and li wh ( 1994 ) a model for the correlation of mutation rate with gc content and the origin of gc - rich isochores .j mol evol 38:468 . 
hardison rc , roskin km , yang s , diekhans m , kent wj , et al .( 2003 ) covariation in frequencies of substitution , deletion , transposition , and recombination during eutherian evolution .genome research 13:13 .hsiao ll , dangond f , yoshida t , hong r , jensen rv , et al .( 2001 ) a compendium of gene expression in normal human tissues .physiol genomics 7:97 .huang k ( 1987 ) statistical mechanics .new york : john wiley and sons .international human genome sequencing consortium ( 2001 ) initial sequencing and analysis of the human genome .nature 409:860 .iida k , akashi h ( 2000 ) a test of translational selection at ` silent ' sites in the human genome : base composition comparisons in alternatively spliced genes .gene 261:93 .kanaya s , yamada y , kinouchi m , kudo y , ikemura t ( 2001 ) codon usage and trna genes in eukaryotes : correlation of codon usage diversity with translation efficiency and with cg - dinucleotide usage as assessed by multivariate analysis .j mol evol 53:290 .kumar s , subramanian s ( 2002 ) mutation rates in mammalian genomes .proc natl acad sci 99:803 .lane rp , cutforth t , young j , athanasiou m , friedman c , et al .( 2001 ) genomic analysis of orthologous mouse and human olfactory receptor loci .proc natl acad sci 98:7390 .lercher mj , williams ejb , and hurst ld ( 2001 ) local similarity in evolutionary rates extends over whole chromosomes in human - rodent and mouse - rate comparisons : implications for understanding the mechanistic basis of the male mutation bias .mol biol evol 18:2032 .lercher mj , hurst ld ( 2002 ) human snp variability and mutation rate are higher in regions of high recombination .trends in genetics 18:337 .li wh ( 1993 ) unbiased estimation of the rates of synonymous and nonsynonymous substitution .j mol evol 36:96 .matassi g , sharp pm , gautier c ( 1999 ) chromosomal location effects on gene sequence evolution in mammals .current biology 9:786 .mouchiroud d , gautier c , bernardi g ( 1995 ) frequences of synonymous substitutions in mammals are gene - specific and correlated with frequencies of nonsynonymous substitutions .j mol evol 40:107 . 
mouse genome sequencing consortium ( 2002 ) initial sequencing and comparative analysis of the mouse genome .nature 420:520 .nei m , gojobori t ( 1986 ) .simple methods for estimating the numbers of synonymous and nonsynonymous nucleotide substitutions .mol bio evol 3:418 - 426 .ohta t , ina y ( 1995 ) .variation in synonymous substitution rates among mammalian genes and the correlation between synonymous and nonsynonymous divergences .j mol evol 41:717 .pal c , hurst ld ( 2003 ) .evidence for co - evolution of gene order and recombination rate .nature genetics 33:392 .papavasiliou fn , schatz dg ( 2002 ) somatic hypermutation of immunoglobulin genes : merging mechanisms for genetic diversity .cell 109:s35 .percudani r , ottonello s ( 1999 ) selection at the wobble position of codons read by the same trna in saccharomyces cerevisiae .mol bio evol 16:1752 .perry j , ashworth a ( 1999 ) evolutionary rate of a gene affected by chromosomal position .current biology 9:987 .rice j ( 1995 ) mathematical statistics and data analysis .belmont : duxbury press .rouquier s , taviaux s , trask bj , brand - arpon v , van den engh g , et al .( 1998 ) distribution of olfactory receptor genes in the human genome .nature genetics 18:243 .sharon d , glusman g , pilpel y , khen m , gruetzner f , et al .( 1999 ) primate evolution of an olfactory receptor cluster : diversification by gene conversion and recent emergence of pseudogenes .genomics 61:24 .sharp pm , averof m , lloyd at , matassi g , and peden jf ( 1995 ) dna sequence evolution : the sounds of silence .phil trans r soc lond b 349:241 .sharp pm , li wh ( 1987 ) the rate of synonymous substitution in enterobacterial genes is inversely related to codon usage bias .mol bio evol 4:222 .smith ngc , hurst ld ( 1999a ) the causes of synonymous rate variation in the rodent genome : can substitution rates be used to estimate the sex bias in mutation rate ?genetics 152:661 .smith ngc , hurst ld ( 1999b ) the effect of tandem substitutions on the correlation between synonymous and nonsynonymous rates in rodents .genetics 153:1395 .tamura k , nei m ( 1993 ) estimation of the number of nucleotide substitutions in the control region of mitochondrial dna in humans and chimpanzees .mol bio evol 10:512 .tasic b , nabholz ce , baldwin kk , kim y , rueckert eh , et al .( 2002 ) promoter choice determines splice site selection in protocadherin alpha and gamma pre - mrna splicing .mol cell 10:21 .tavere s ( 1986 ) some probabilistic and statistical problems on the analysis of dna sequences . lec math life sci 17:57 .uemura t ( 1998 ) the cadherin superfamily at the synapse : more members , more missions .cell 93:1095 .williams ejb and hurst ld ( 2002 ) clustering of tissue - specific genes underlies much of the similarity in rates of protein evolution of linked genes .j mol evol 54:511 .winzeler ea , shoemaker dd , astromoff a , liang h , anderson k , et al .( 1999 ) functional characterization of the saccharomyces cerevisiae genome by gene deletion and parallel analysis .science 285:901 .wolfe kh , sharp pm , li wh ( 1989 ) mutation rates differ among regions of the mammalian genome. nature 337:283 .wu q , zhang t , cheng j - f , kim y , grimwood j , et al . 
( 2001 ) comparative dna sequence analysis of mouse and human protocadherin gene clusters .genome research 11:389 .yang z ( 1997 ) paml : a program package for phylogenetic analysis by maximum likelihood .comput appl biosci 13:555 .zhang l and li wh ( 2003 ) mammalian housekeeping genes evolve more slowly than tissue - specific genes .mol bio evol epub : http://mbe.oupjournals.org / cgi / reprint / msh010v1 .this material is based upon work supported by the national science foundation under a grant awarded in 2003 .any opinions , findings , and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the national science foundation .jc would like to thank t. hwa , d. petrov , and c. s. chin for comments on the manuscript .* figure 1 .distribution of normalized substitution rates * histogram of substitution rates based on 14790 orthologous mouse and human genes ( black curve ) .the rate distribution has significantly more genes at high and low rates than the expected normal distribution ( red curve ) .this bias toward high and low rates remains even when rates are corrected for human base composition ( green curve ) .* figure 2 .clustering versus substitution rate for go categories containing at least 5 members * virtually all clustered gene categories have higher than average substitution rates ( ) .* figure 3 .clustering of olfactory genes on human chromosome 11 * the olfactory genes are clustered into three regions along the chromosome .the substitution rates of the olfactory genes are almost all hot , while the non - olfactory genes are distributed around .* figure 4 .olfactory genes lie in a mutational hot spot * substitution rates of the olfactory genes in the central region of human chromosome 11 . the substitution rate of ancestral repeat sequences is higher in the region where the olfactory genes lie .* figure 5 .homophilic cell adhesion genes also lie in a hot spot * substitution rates of a cluster of homophilic cell adhesion genes on human chromosome 5 , along with substitution rates of other genes and ancestral repeat sequences .the repeat sequence substitution rate plateaus at a higher level in this region .* figure 6 .correlation length analysis of substitution rates * correlation of substitution rates in syntenous blocks as a function of distance between genes measured along the human chromosome ( top ) and measured along the mouse chromosome ( bottom ) .there are two length scales of correlation decay : a short one of one megabase and a long one of 10 megabases .the curve fits are for for the region $ ] .* table 1 .statistically significant hot gene ontology categories * listed are the categories with having at least 5 genes and , sorted by statistical significance ( ) .there is a bias toward proteins involved in extra - cellular communication .several of the categories have an unusual number of clustered genes ( ) .* table 2 .statistically significant cold gene ontology categories * listed are the categories with having at least 5 genes and , sorted by statistical significance .there is a bias toward proteins involved in dna , rna , or protein regulation .none of the cold categories have statistically significant clustering .
* Background: * The neutral mutation rate is known to vary widely along human chromosomes, leading to mutational hot and cold regions. * Methodology / Principal findings: * We provide evidence that categories of functionally related genes reside preferentially in mutationally hot or cold regions, the size of which we have measured. Genes in hot regions are biased toward extracellular communication (surface receptors, cell adhesion, immune response, etc.), while those in cold regions are biased toward essential cellular processes (gene regulation, RNA processing, protein modification, etc.). From a selective perspective, this organization of genes could minimize the mutational load on genes that need to be conserved and allow fast evolution for genes that must frequently adapt. We also analyze the effects of gene duplication and chromosomal recombination, which contribute significantly to these biases for certain categories of hot genes. * Conclusions / Significance: * Overall, our results show that genes are located non-randomly with respect to hot and cold regions, offering the possibility that selection acts at the level of gene location in the human genome.
let be i.i.d . observations , where and the s and s are independent .assume that the s are unobservable and that they have the density and also that the s have a known density the deconvolution problem consists in estimation of the density based on the sample a popular estimator of is the deconvolution kernel density estimator , which is constructed via fourier inversion and kernel smoothing .let be a kernel function and a bandwidth .the kernel deconvolution density estimator is defined as where denotes the empirical characteristic function of the sample , i.e. and are fourier transforms of the functions and respectively , and the estimator was proposed in and and there is a vast amount of literature dedicated to it ( for additional bibliographic information see e.g. and ) .depending on the rate of decay of the characteristic function at plus and minus infinity , deconvolution problems are usually divided into two groups , ordinary smooth deconvolution problems and supersmooth deconvolution problems . in the first caseit is assumed that decays algebraically and in the second case the decay is essentially exponential .this rate of decay , and consequently the smoothness of the density has a decisive influence on the performance of .the general picture that one sees is that smoother is , the harder the estimation of becomes , see e.g. .asymptotic normality of in the ordinary smooth case was established in , see also .the limit behaviour in this case is essentially the same as that of a kernel estimator of a higher order derivative of a density .this is obvious in certain relatively simple cases where the estimator is actually equal to the sum of derivatives of a kernel density estimator , cf . .our main interest , however , lies in asymptotic normality of in the supersmooth case . in this case under certain conditions on the kernel and the unknown density the following theorem was proved in .[ thmanfan ] let be defined by .then )\convd { \mathcal n}(0,1)\ ] ] as here either or is the sample variance of with the asymptotic variance of itself does not follow from this result . on the other hand ,see also , derived a central limit theorem for where the normalisation is deterministic and the asymptotic variance is given . for the purposes of the present work it is sufficient to use the result of .however , before recalling the corresponding theorem , we first formulate conditions on the kernel and the density [ condw ] let be real - valued , symmetric and have support . ] in this case and another kernel satisfying condition [ condw ] is its corresponding fourier transform is given by }(t). 
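A minimal numerical sketch of the estimator defined above is given below, using plain quadrature of the Fourier inversion formula. The specific kernel transform, (1 - t^2)^3 on [-1, 1], and the Gaussian (hence supersmooth) measurement error are illustrative assumptions standing in for the kernel and error law whose formulas are not legible in this copy.

```python
import numpy as np

def phi_kernel(t):
    """Fourier transform of the smoothing kernel; (1 - t^2)^3 on [-1, 1] is a
    common choice satisfying the support condition (an illustrative choice)."""
    return np.where(np.abs(t) <= 1.0, (1.0 - t**2)**3, 0.0)

def deconv_kde(x_grid, data, h, phi_noise, n_t=2001):
    """Deconvolution kernel density estimate on x_grid from the contaminated
    sample `data`, bandwidth h and known noise characteristic function
    phi_noise(t): plain quadrature of the Fourier inversion formula
    (1/2pi) * int exp(-i t x) phi_w(h t) phi_emp(t) / phi_noise(t) dt."""
    t = np.linspace(-1.0 / h, 1.0 / h, n_t)    # phi_kernel(h*t) vanishes for |t| > 1/h
    dt = t[1] - t[0]
    emp_cf = np.exp(1j * np.outer(t, data)).mean(axis=1)   # empirical characteristic function
    integrand = phi_kernel(h * t) * emp_cf / phi_noise(t)
    est = (np.exp(-1j * np.outer(x_grid, t)) * integrand).sum(axis=1) * dt / (2.0 * np.pi)
    return est.real

# illustrative run: standard normal signal observed with N(0, 0.2^2) error
rng = np.random.default_rng(1)
sigma_noise = 0.2
phi_gauss = lambda t: np.exp(-0.5 * (sigma_noise * t) ** 2)
y = rng.normal(0.0, 1.0, 500) + rng.normal(0.0, sigma_noise, 500)
grid = np.linspace(-4.0, 4.0, 201)
f_hat = deconv_kde(grid, y, h=0.3, phi_noise=phi_gauss)
```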
] then , as and )\convd { \mathcal n}\left(0,\frac{a^2}{2\pi^2}\left(\frac{\mu}{\lambda}\right)^{2 + 2\alpha}(\gamma(\alpha+1))^2\right).\ ] ] here denotes the gamma function .the goal of the present note is to compare the theoretical behaviour of the estimator predicted by theorem [ thman ] to its behaviour in practice , which will be done via a limited simulation study .the obtained results can be used to compare theorem [ thmanfan ] to theorem [ thman ] , e.g.whether it is preferable to use the sample standard deviation in the construction of pointwise confidence intervals ( computation of is more involved ) or to use the normalisation of theorem [ thman ] ( this involves evaluation of a simpler expression ) .the rest of the paper is organised as follows : in section [ simulations ] we present some simulation results , while in section [ conclusions ] we discuss the obtained results and draw conclusions .all the simulations in this section were done in mathematica .we considered three target densities .these densities are : 1 . density # 1 : 2 .density # 2 : 3 .density # 3 : the density # 2 was chosen because it is skewed , while the density # 3 was selected because it has two unequal modes .we also assumed that the noise term was distributed . notice that the noise - to - signal ratio /\var[y ] 100\% ] on that grid .notice that it is easier to evaluate by rewriting it in terms of the characteristic functions , which can be done via parseval s identity , cf . . for real dataof course the above method does not work , because depends on the unknown we refer to for data - dependent bandwidth selection methods in kernel deconvolution .following the recommendation of , in order to avoid possible numerical issues , the fast fourier transform was used to evaluate the estimate .several outcomes for two sample sizes , and are given in figure [ fig1 ] .we see that the fit in general is quite reasonable .this is in line with results in , where it was shown by finite sample calculations that the deconvolution kernel density estimator performs well even in the supersmooth noise distribution case , if the noise level is not too high .( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm in figure [ fig2 ] we provide histograms of estimates that we obtained from our simulations for and ( the densities # 1 and # 2 ) and for and ( the density # 3 ) . for the density# 1 points and were selected because the first corresponds to its mode , while the second comes from the region where the value of the density is moderately high .notice that is a boundary point for the support of density # 2 and that the derivative of density # 2 is infinite there . for the density # 3the point corresponds to the region between its two modes , while is close to where it has one of its modes .the histograms look satisfactory and indicate that the asymptotic normality is not an issue .( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm ( 5.5,4.0 ) = 5.5 cm our main interest , however , is in comparison of the sample standard deviation of at a fixed point to the theoretical standard deviation computed using theorem [ thman ] .this is of practical importance e.g. for construction of confidence intervals .the theoretical standard deviation can be evaluated as upon noticing that in our case , i.e. 
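The bandwidth selection step can be sketched as follows: because the target density is known in a simulation study, one may pick the bandwidth that minimizes a Monte Carlo estimate of the mean integrated squared error over a grid of candidate values. This reuses deconv_kde and phi_gauss from the previous sketch; the criterion, the grid of bandwidths and the stand-in target density are assumptions and not necessarily the exact procedure used for the figures.

```python
import numpy as np

def mise(h, f_true, x_grid, n, n_rep, sigma_noise, rng):
    """Monte Carlo estimate of the mean integrated squared error of the
    deconvolution estimator at bandwidth h, over n_rep simulated samples."""
    dx = x_grid[1] - x_grid[0]
    total = 0.0
    for _ in range(n_rep):
        x = rng.normal(0.0, 1.0, n)                      # stand-in target density
        y = x + rng.normal(0.0, sigma_noise, n)
        f_hat = deconv_kde(x_grid, y, h=h, phi_noise=phi_gauss)
        total += np.sum((f_hat - f_true(x_grid)) ** 2) * dx
    return total / n_rep

rng = np.random.default_rng(2)
grid = np.linspace(-4.0, 4.0, 161)
f_normal = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
bandwidths = np.linspace(0.15, 0.6, 10)
scores = [mise(h, f_normal, grid, n=500, n_rep=20, sigma_noise=0.2, rng=rng)
          for h in bandwidths]
h_star = bandwidths[int(np.argmin(scores))]
print("selected bandwidth:", h_star)
```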
when using kernel and the error distribution we have after comparing this theoretical value to the sample standard deviation of the estimator at points and ( the densities # 1 and # 2 ) and at points and ( the density # 3 ) , see table [ table1 ] , we notice a considerable discrepancy ( by a factor for the density # 1 and even larger discrepancy for densities # 2 and # 3 ) . at the same time the sample meansevaluated at these two points are close to the true values of the target density and broadly correspond to the expected theoretical value note here that the bias of is equal to the bias of an ordinary kernel density estimator based on a sample from see e.g. ..[table1 ] sample means and and sample standard deviations and evaluated at and ( densities # 1 and # 2 ) and and ( the density # 3 ) together with the theoretical standard deviation and the corrected theoretical standard deviation .the bandwidth is given by [ cols="^,^,^,^,^,^,^,^",options="header " , ] finally , we mention that results qualitatively similar to the ones presented in this section were obtained for the kernel as well .these are not reported here because of space restrictions .in the simulation examples considered in section [ simulations ] for theorem [ thman ] , we notice that the corrected theoretical asymptotic standard deviation is always considerably larger than the sample standard deviation given the fact that the noise level is not high .we conjecture , that this might be true for the densities other than # 1 , # 2 and # 3 as well in case when the noise level is low .this possibly is one more explanation of the fact of a reasonably good performance of deconvolution kernel density estimators in the supersmooth error case for relatively small sample sizes which was noted in . on the other handthe match between the sample standard deviation and the corrected theoretical standard deviation is much better for higher levels of noise .these observations suggest studying the asymptotic distribution of the deconvolution kernel density estimator under the assumption as cf. , where denotes the standard deviation of the noise term .our simulation examples suggest that the asymptotic standard deviation evaluated via theorem [ thman ] in general will not lead to an accurate approximation of the sample standard deviation , unless the bandwidth is small enough , which implies that the corresponding sample size must be rather large .the latter is hardly ever the case in practice .on the other hand , we have seen that in certain cases this poor approximation can be improved by using the left - hand side of instead of the right - hand side .a perfect match is impossible to obtain given that we still neglect the remainder term in .however , even after the correction step , the corrected theoretical standard deviation still differs from the sample standard deviation considerably for small sample sizes and lower levels of noise .moreover , in some cases the corrected theoretical standard deviation is even farther from the sample standard deviation than the original uncorrected version .the latter fact can be explained as follows : 1 .it seems that both the theoretical and corrected theoretical standard deviation overestimate the sample standard deviation .the value of the bandwidth for which the match between the corrected theoretical standard deviation and the sample standard deviation become worse , belongs to the range where the corrected theoretical standard deviation is larger than the theoretical standard deviation . 
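The comparison reported in the table can be reproduced in outline as follows: repeat the simulation many times, evaluate the estimator at the fixed points, and take the sample mean and standard deviation across replications; the asymptotic standard deviation from the central limit theorem (whose expression is not reproduced here) is then set against these empirical values. This again reuses deconv_kde and phi_gauss from the earlier sketch, with illustrative choices of density, noise level and sample size.

```python
import numpy as np

# sampling distribution of the estimator at fixed points, from repeated simulations
rng = np.random.default_rng(3)
sigma_noise, n, h = 0.2, 500, 0.3
points = np.array([0.0, 1.0])
reps = 200
vals = np.empty((reps, points.size))
for k in range(reps):
    x = rng.normal(0.0, 1.0, n)                          # stand-in target density
    y = x + rng.normal(0.0, sigma_noise, n)
    vals[k] = deconv_kde(points, y, h=h, phi_noise=phi_gauss)
print("sample mean:", vals.mean(axis=0))
print("sample standard deviation:", vals.std(axis=0, ddof=1))
```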
in view of item 1 above , it is not surprising that in this case the theoretical value turns out to be closer to the sample standard deviation than the corrected theoretical value .the consequence of the above observations is that a naive attempt to directly use theorem [ thman ] , e.g. in the construction of pointwise confidence intervals , will lead to largely inaccurate results .an indication of how large the contribution of the remainder term in can be can be obtained only after a thorough simulation study for various distributions and sample sizes , a goal which is not pursued in the present note . from the three simulation examples that we considered, it appears that the contribution of the remainder term in is quite noticeable for small sample sizes .for now we would advise to use theorem [ thman ] for small sample sizes and lower noise levels with caution .it seems that the similar cautious approach is needed in case of theorem [ thmanfan ] as well , at least for some values of unlike for the ordinary smooth case , see , there is no study dealing with the construction of uniform confidence intervals in the supersmooth case . in the latter paper a better performance of the bootstrap confidence intervalswas demonstrated in the ordinary smooth case compared to the asymptotic confidence bands obtained from the expression for the asymptotic variance in the central limit theorem .the main difficulty in the supersmooth case is that the asymptotic distribution of the supremum distance between the estimator and the true density is unknown .our simulation results seem to indicate that the bootstrap approach is more promising for the construction of pointwise confidence intervals than e.g. the direct use of theorems [ thmanfan ] or [ thman ] .moreover , the simulations suggest that at least theorem [ thman ] is not appropriate when the noise level is low .
Via a simulation study we compare the finite sample performance of the deconvolution kernel density estimator in the supersmooth deconvolution problem to its asymptotic behaviour predicted by two asymptotic normality theorems. Our results indicate that for lower noise levels and moderate sample sizes the match between the asymptotic theory and the finite sample performance of the estimator is not satisfactory. On the other hand, we show that the two approaches produce reasonably close results for higher noise levels. These observations in turn provide additional motivation for the study of deconvolution problems under the assumption that the error term variance vanishes as the sample size grows.
_Keywords:_ finite sample behavior, asymptotic normality, deconvolution kernel density estimator, fast Fourier transform.
_AMS subject classification:_ 62G07
at the heart of many astronomical studies today is the basic step of catalog merging ; combining measurements from different time intervals , wavelengths , and potentially separate instruments and telescopes .scientific analyses exploit these multicolor cross - matches to understand the temporal and photometric nature of the underlying objects . in doingso they rely implicitly on the quality of the associations , thus the cross - identification of sources is arguably one of the most important steps in measuring the properties of celestial objects . in general , cross - matching catalogsis a difficult problem that can not really be separated from the scientific question at hand .an example of this is apparent when we consider the case of stellar observations .stars that move between observations , due to their proper motions , are difficult to merge into multicolor sources ( even within a single survey ) . yet without the multicolor information it might not be possible to classify the source as a star in the first place . with a new generation of surveys that will take large quantities of multicolor photometry covering the galactic plane and observed over a period of several years ( e.g. the panoramic survey telescope & rapid response system , panstarrs , and the large synoptic survey telescope , lsst ) it is clear that addressing these issues is becoming a serious concern . in the recent work of a general probabilistic formalismwas introduced that is extendable to arbitrarily complex models .the beauty of the approach of bayesian hypothesis testing is that it clearly separates the contributions of different types of measurements , e.g. , the position on the sky or the colors of the sources , yet , naturally combines them into a coherent method .it is a generic framework that provides the prescription for the calculations that can be refined with more and more sophisticated modeling . in this paper , we go beyond the simple case of stationary objects , and study the cross - identification of point sources that move on the sky .most importantly we focus on stars that can be significantly offset between the epochs of observations .although we only have loose constraints on their proper motions in general , this prior knowledge is enough to revise our static models , and work out the bayesian evidence of the matches . in section [ sec : bf ]we introduce a class of models that allow for changes in the position over time .section [ sec : prior ] deals with the a priori constraints on the proper motions of the stars and their empirical ensemble statistics . in section [ sec : res ] we show the improvements over the static model on actual observations of stars , and section [ sec : sum ] concludes our study . throughout this paper ,we adopt the convention to use the capital symbol for probabilities and the lower case letter for probability densities .conceptually , modeling the position of moving sources is straightforward .the description combines the motion and the uncertainty of the astrometric measurements .the first question to answer is where on the sky one should expect to see an object of a certain proper motion , if it had been in some known position at a given time .next , we calculate the evidence that given detections are truly observations of the same object . the positional accuracy is characterized by a probability density function ( hereafter pdf ) on the celestial sphere . 
in a given model ,this function tells us where to expect detections of an object that is at its true location . throughout this paper, we use 3-dimensional unit vectors for the positions on the sky , e.g. , the aforementioned and quantities . usually the pdf is a very sharp peak and is assumed to be a normal distribution with some angular accuracy .the correct generalization to directional measurements is the distribution , whose shape parameter is essentially in the limit of large concentration ; see details in .the added complication comes from the fact that some objects are not stationary .if a given star is at location now and has proper motion then time later , it would be at some other position that is offset by a small displacement along a great circle . by substituting this position into our astrometric model, we create a new one with the added proper motion , , and time difference , , parameters . naturally , there is nothing specific in this about the chosen characterization of the astrometry ; one can use any appropriate pdf in place of the fisher distribution instead . at the heart of the probabilistic cross - identificationis the bayes factor used for hypothesis testing .the question we are asking is whether our data , a set of detected sources in separate catalogs with positions , are truly from the same object .for every catalog , we know its epoch and its astrometry characterized by a known pdf .let denote the hypothesis that assumes that all measured positions are observations of the same object , and let denote its complement , i.e. , any one or more of the detections might belong to a separate object . by definition ,the bayes factor is the ratio of the likelihoods of the two hypotheses we wish to compare , that are calculated as the integrals over their entire parameter spaces .if we assume that there is a single object behind the observations , we can integrate over its unknown proper motion and position to calculate where the joint likelihood of given the data is written as the product of the independent components and is the prior on the parameters , which is the subject of the following section .the actual calculation of this likelihood depends on the prior and might only be accessible via numerical methods .the complementary hypothesis is more complicated in the sense that the model has a set of independent objects with parameters , however , the result of the calculation turns out to be much simpler . herethe integral separates into the product of for each integral , we can select a reference time such that , hence the effect of the proper motion drops out , and we arrive at the same result as the stationary case discussed by .the proper motion really only shows up in the numerator of the bayes factor for assessing the quality of the association . the model is well - defined but the integration domain is set by the joint prior that is yet to be determined . in general, the prior can be very complicated for its dependence on the properties of the star .simply put , brighter sources are likely to be closer , and hence , have a larger proper motion .more complicated is the effect of the color that is ( along with its magnitude ) a proxy for placing the star in different stellar populations with different dynamics . 
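Two small helpers make the model above concrete: a great-circle displacement of a unit vector by a proper-motion vector in its tangent plane, and the log density of the Fisher distribution, whose concentration parameter plays the role of the inverse squared astrometric error in the high-concentration limit. The code is a sketch of the geometry only; the numerical values in the usage lines are illustrative.

```python
import numpy as np

def displace(x, mu_tangent, dt):
    """Position of a star a time dt after it was at the unit vector x, moving
    with proper motion mu_tangent (a vector in the tangent plane at x, i.e.
    orthogonal to x, in radians per unit time): a great-circle offset by the
    angle |mu_tangent| * dt."""
    speed = np.linalg.norm(mu_tangent)
    if speed == 0.0:
        return x
    angle = speed * dt
    return np.cos(angle) * x + np.sin(angle) * (mu_tangent / speed)

def log_fisher(d, m, kappa):
    """Log density of the Fisher distribution on the sphere: detection d
    (unit vector) given the true direction m with concentration kappa, which
    behaves like 1/sigma^2 for small astrometric errors."""
    # log(kappa / (4*pi*sinh(kappa))) + kappa * m.d, written to stay finite
    # for the very large kappa values typical of survey astrometry
    return (np.log(kappa) - np.log(2.0 * np.pi)
            - np.log1p(-np.exp(-2.0 * kappa))
            + kappa * (np.dot(m, d) - 1.0))

# e.g. a star at the x-axis moving 100 mas/yr for 5 years
mas = np.pi / 180.0 / 3.6e6                   # one milliarcsecond in radians
x0 = np.array([1.0, 0.0, 0.0])
mu = 100.0 * mas * np.array([0.0, 1.0, 0.0])  # tangent to the sphere at x0
print(displace(x0, mu, 5.0))
```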
in this paper, we will not discuss these effects that will be a topic of future work .we also note that the prior can be a function of the time difference , , to account for cases when the star travels far between observations to a new location with different source density .however , we expect this to be a small effect because the typical speed of stars and the usual time differences between observations today yield small displacements on the sky. using the basic properties of conditional densities , we can write it as the product where the first term is the prior on the position , e.g. , the all - sky prior written with dirac s symbol as and the more complicated second term describes the possible proper motions as a function of location and optionally other properties .the simplest possible model , after the stationary case , is to assume a uniform prior on up to some limit independent of the location , i.e. , we will use this simple prior for comparison in addition to the stationary case , where is assumed to be negligible . .to derive a more realistic prior , we choose to study the ensemble statistics of stars instead of approaching the problem with an analytic model . while the latter would have the advantage of providing a function at an arbitrary resolution , the formulas are difficult to derive and the analytic approximations might miss subtle details of the relation that could be relevant .we study the properties of stars in the sloan digital sky survey catalog archive that also contains accurate proper motion measurements from the recalibrated united states naval observatory b1.0 catalog ( usno - b ; * ? ? ?* ) . for this analysis, we pick stars from the stripe 82 data set where multiple observations are available over 300 square degrees strip covering a narrow range in declination between .these repeated observations were taken between june and december each year from 1998 to 2005 . after rejecting saturated and faint sources ( u should be in the range of 1523.5 and g , r , i , z in the range of 14.524 ) , the number of stars is around 100,000 .this size does not allow for a high - resolution determination of the prior , hence we analyze additional simulation data . to extend the number of stars used in constructing the prior, we used the current state - of - the - art besanon models that match the sdss distributions well . assuming four different stellar populations in the milky way , using the poisson equation and collisionless boltzmann equation with a set of observational parameters ( i.e. fitting parameters to the dynamical rotation curve ) , they compute the number of stars of a given age , type , effective temperature and absolute magnitude , at any place in the galaxy .the model has been successfully used for predictions of kinematics and comparison with observational data in studies , e.g. , , , , , .a total of stars were generated from the besanon models using large - field equatorial coordinates , thus the prior is dominated by model data and not by sdss measurements .the proper motion distribution of sdss data and that of the model are very consistent , with the model yielding somewhat wider distributions than the observations . 
in preparation for binning the data ,we separate the dependence of the prior on the different proper motion components , , and , omitting the explicit hypothesis , we write since the stripe 82 data set in sdss contains sources only in a narrow declination range between and , we can safely neglect the dependence on declination in equation ( [ pm1 ] ) for the purpose of this study , thus we establish these relations one - by - one starting with the former using the basic property of conditional densities to achieve a better signal - to - noise behaviour across the entire parameter space , we do not use a uniform grid but vary bin sizes so that they follow the asinh ( ) in both parameters . in this way one can have higher resolution bins where more data are available ( around the peak close to 0 ) and wider bins in the tail .we further improve the quality of the empirical prior by removing high - frequency noise with a convolution filter , whose characteristic width is approximately one pixel in size at any location .the integral in the denominator is evaluated by counting the stars in the appropriate bins using the widths of the bins to weight the counts .figure [ prior1 ] shows the prior using the aforementioned non - linear scale for as function of the position .the distribution is centered on approximately , and the location of the mode is practically independent of the r.a . as one nears to the direction of the galactic plane ( and are the nearest regions in this stripe ) the distribution becomes sharper .this is to be expected as , if we included a broader range in right ascension , the pdf would get even narrower as the velocity dispersion of stars decreases . the second term of equation ( [ pm2 ] ) is constructed similarly using the same adaptive binning and smoothing but in even higher dimensions .it is difficult to visualize a 4-dimensional pdf , hence , in figure [ prior2 ] , we plot slices of the prior at various values .both axes are shown in the transformed scale .the values and represent the two edges of stripe 82 , which are closest parts to the galactic plane .the same effect can be seen on these panels as on figure [ prior1 ] , looking out of the plane the pdf gets more disperse .the boxy ( squared ) shape of the contour lines arises from the asinh ( ) transformation ; on a linear system , the contours would appear to be more circular .as mentioned earlier , stripe 82 was observed repeatedly from 1998 to 2005 , between june and december of each year .thus we can obtain multi - epoch observations to test our method .we choose a range of stars with different proper motions observed at different epochs . 
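The construction of the empirical prior can be sketched as follows: histogram the catalog on an asinh-stretched proper-motion grid (finer bins near zero, wider bins in the tails), smooth with a filter of roughly one pixel, and normalize each right-ascension column so that it integrates to one in the linear proper-motion variable. The asinh scale parameter, the Gaussian smoothing filter and the toy catalog are assumptions; the actual bin layout used for the figures is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def empirical_pm_prior(ra, pm_ra, ra_edges, pm_grid_asinh, scale=100.0, smooth=1.0):
    """Empirical conditional density p(pm_ra | ra) on an asinh-stretched
    proper-motion grid: histogram the catalog, smooth by ~one pixel, then
    normalize each ra column to integrate to one in pm_ra (linear units)."""
    pm_t = np.arcsinh(pm_ra / scale)                  # stretch: finer bins near 0
    hist, _, _ = np.histogram2d(ra, pm_t, bins=[ra_edges, pm_grid_asinh])
    hist = gaussian_filter(hist, sigma=smooth)
    pm_edges = scale * np.sinh(pm_grid_asinh)         # bin edges back in mas/yr
    widths = np.diff(pm_edges)
    density = hist / widths                           # counts per (mas/yr)
    norm = (density * widths).sum(axis=1, keepdims=True)
    return density / np.where(norm > 0, norm, 1.0)

# illustrative call: ra in degrees, pm_ra in mas/yr, drawn from a toy catalog
rng = np.random.default_rng(4)
ra = rng.uniform(-50.0, 60.0, 20000)
pm = rng.normal(-2.0, 15.0, 20000)
prior = empirical_pm_prior(ra, pm,
                           ra_edges=np.linspace(-50.0, 60.0, 12),
                           pm_grid_asinh=np.linspace(-6.0, 6.0, 61))
```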
to be sure that for our tests the observed stars are the same in each epoch we select bright stars ( magnitude ) with tolerances in all magnitudes ( typically 0.7 in u , 0.4 in g , r , i and 0.5 in z ) .the query gives us on the average 20 epochs per star , from which many were observed with small time intervals while the biggest time interval between the epochs is approximately 6.5 years .we divide the time interval into 3 approximately equal parts and thus get 4 observations of each star with much the same time intervals between them .according to equations ( [ int1 ] ) and ( [ int2 ] ) , only ra and dec coordinates are used for calculating the bayes factors , usno measurements of proper motion on the forthcoming figures and tables are only shown as a reference .we randomly select a dozen stars for the following tests .we calculate the integrals of the bayes factor numerically .our monte - carlo implementation generates independent random positions ( 3-d unit vectors ) and two random components of the velocity that yield the vector in the tangent plane of each , i.e. , in theory , one has to integrate the position over the whole celestial sphere , and the proper motion out to infinity , but the integrands always drop sharply in practice , hence one can bound all the relevant parameters easily to reduce the computational need and to use the above approximation to the motion . for more efficient implementations , one can utilize more sophisticated markov chain monte carlo ( mcmc ) methods .the uncertainty estimates include two separate sources of errors .the numerical imprecision is tuned by the number of generated random parameters , and can be estimated in the process of the integration . in our calculations , this error term is kept at a low level , and contributes order of magnitude to the value of the weight of evidence .another source of error comes from the uncertainty in position measurements .while this is small for the sdss detections , , the short time differences could yield large relative errors in the proper motion . to get the order of this error, we generate 100 random realizations of the position of every star from the appropriate gaussian distribution , and recalculate the bayes factor to derive the root - mean - square error in the weight of evidence .the figures later in this section contain error bars that represent these 1 deviations .after the static case , the simplest is a uniform prior as introduced in equation ( [ eq : uconst ] ) .this analytic formula may appear at first not to favor any particular proper motion , yet the bayes factor has some non - trivial scaling properties that are worth considering . as the displacement of the source is a product of the time difference and the proper motion , associations at the same distances but with varying time intervalswill indeed have different bayes factors . 
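A sketch of the Monte Carlo integration is given below. It assumes the all-sky 1/(4 pi) position prior and, for simplicity, the uniform proper-motion prior on a disc of radius mu_max; the empirical prior of the previous section could be substituted by weighting each sample accordingly. Positions are drawn uniformly on a spherical cap that is assumed to contain essentially all of the integrand, and the displacement and Fisher-density helpers are those sketched earlier. The sample count and all numerical choices are illustrative, and an MCMC or importance scheme would be preferable for production use.

```python
import numpy as np

def tangent_basis(x):
    """Two orthonormal vectors spanning the tangent plane at the unit vector x."""
    ref = np.array([0.0, 0.0, 1.0])
    if abs(x[2]) > 0.99:                      # avoid a degenerate reference axis
        ref = np.array([1.0, 0.0, 0.0])
    e1 = np.cross(ref, x); e1 /= np.linalg.norm(e1)
    e2 = np.cross(x, e1)
    return e1, e2

def mc_bayes_factor(detections, kappas, epochs, mu_max, theta_max,
                    n_samples=20_000, rng=None):
    """Monte Carlo estimate of the Bayes factor that the detections (unit
    vectors) are one moving source.  Assumes the all-sky 1/(4*pi) position
    prior and a proper-motion prior uniform on the disc |mu| <= mu_max
    (radians per unit time); epochs are measured from a common reference.
    Uses displace() and log_fisher() from the earlier sketch."""
    rng = np.random.default_rng(rng)
    d = np.asarray(detections, float)
    n_cat = len(d)
    center = d.mean(axis=0); center /= np.linalg.norm(center)
    c1, c2 = tangent_basis(center)
    # uniform samples on a spherical cap of angular radius theta_max around `center`
    cos_t = rng.uniform(np.cos(theta_max), 1.0, n_samples)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    xs = (cos_t[:, None] * center + sin_t[:, None] *
          (np.cos(phi)[:, None] * c1 + np.sin(phi)[:, None] * c2))
    cap_area = 2.0 * np.pi * (1.0 - np.cos(theta_max))
    # uniform samples of the proper motion in the tangent disc at each position
    r = mu_max * np.sqrt(rng.uniform(0.0, 1.0, n_samples))
    a = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    total = 0.0
    for x, ri, ai in zip(xs, r, a):
        e1, e2 = tangent_basis(x)
        mu = ri * (np.cos(ai) * e1 + np.sin(ai) * e2)
        like = 1.0
        for di, ki, ti in zip(d, kappas, epochs):
            m = displace(x, mu, ti)
            like *= np.exp(log_fisher(di, m, ki))
        total += like
    # the uniform disc prior density cancels against the disc sampling volume
    numerator = cap_area * total / (n_samples * 4.0 * np.pi)
    denominator = (1.0 / (4.0 * np.pi)) ** n_cat
    return numerator / denominator
```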
in the case of a longer , only smaller proper motions will contribute to the integral in equation ( [ int1 ] ) shrinking the integral domain .this yields a scaling by a factor of , as seen in the left panel of figure [ uniform ] .this means that associations will be assigned lower qualities if they are farther in time even if their angular separations are identical .another interesting aspect is the selection of the limiting value .our choice of 600 mas / yr is admittedly somewhat arbitrary and was selected to cover the stars in our sample .if one decreased its value then stars moving at faster speeds would quickly get lower bayes factors and clearly not be associated .increasing limits make the value of the constant prior drop , which in turn will lower the quality of the associations . for illustrating this effect we compute the bayes factors for a star with and years as a function of ,see the right panel of figure [ uniform ] . as the prior is proportional to , the curve follows the same trend .first we analyze the quality of the associations as a function time difference between the observations .the top panels of figure [ 2obs ] show the logarithm of the bayes factor , a.k.a . the weight of evidence for all stars in our test sample as a function of their proper motion .open circles represent the results from the static model that can be obtained analytically as in , and crosses show the new measurements from the numerical integration of the improved model using the empirical prior introduced in this study .triangles signal the value for a simple model of constant proper motion prior with .if we correct for small relative differences in time intervals between the epochs taking 2 , 4.5 and 6.5 years as a reference respectively , this prior yields practically constant weights of evidence . for reference ,the threshold is plotted as the dashed horizontal line .this is the theoretical dividing line above which the observations support the hypothesis of the match .all panels contain the same objects but the calculations are based on different detections that are farther apart in time as we go from left to right .what we see immediately is that as the time difference increases , the models provide increasingly different results : the static model starts rejecting stars with larger proper motions much faster than models that accommodates the possibility of the sources moving .while the only objective measure of the quality for the match is the bayes factor , its interpretation for the uninitiated is admittedly not as obvious at first as a probability value would be , where one has a good sense of the meaning of the values . 
from the bayes factor $b$, we can calculate the posterior probability $p$, given a prior probability $p_0$, via the equation \[p = \left[1 + \frac{1-p_0}{b\,p_0}\right]^{-1}.\] assuming a constant prior over the sky with the value of $p_0 = 1/n_\star$, where $n_\star$ is the estimated total number of stars on the sky as computed from the average density in sdss, we can plot the matching probabilities for comparison. note that large posterior probabilities are not sensitive to small modulations in the density; the posterior changes linearly with the prior only for small values of the bayes factor. the bottom panels of figure [2obs] use the same symbols as the top ones to illustrate the derived posteriors using the above constant prior. the difference between the models is possibly even more striking here: while the left panel has very similar estimates from the different models, with time the separations grow large enough to quickly zero out the probabilities for stars with proper motions larger than 100 mas/yr, whereas the new models keep the probabilities significantly larger. table [tab:dt] contains the measurements for all stars as a function of the time difference. the first column of the table is the identifier of the star, the objid in sdss data release 6. the reference proper - motion values are taken from the `propermotions` table of the sdss catalog science archive, which combines the sdss and the recalibrated usno - b astrometry for a precise and reliable determination. we see two important features of the associations using the new proper - motion priors. the constant prior yields lower and lower bayes factors as the elapsed time increases, and the probability drops regardless of the proper motion. even when the separations are small enough for the static model to perfectly recover the object, the constant proper - motion prior yields a lower 90% probability. the empirical prior always outperforms the static model but, in this 2-epoch observation case, the probabilities of the fast stars fall below the constant case or any reasonable probability threshold. next we turn our attention to the potential improvements from including additional epochs in the data sets. for this comparison, we keep the first and last observations in time, hence the baseline is the same for all cases. we add to these two observations additional measurements whose epochs are between them in time. these 2-, 3- and 4-way associations are shown in the left, middle and right panels of figure [long_obs], respectively. it is apparent that adding new detections significantly improves the proper - motion models: the reasonably good associations of only two detections are promoted to essentially certain matches by including intermediate detections.
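the conversion from the weight of evidence to a posterior probability used in these comparisons can be written compactly; in the snippet below the prior value and the star count are illustrative assumptions.

```python
import numpy as np

def posterior_from_bayes_factor(log_B, prior):
    """Posterior match probability from ln(B) and a prior match probability.
    Implements P = [1 + (1 - p) / (B * p)]^{-1}, evaluated in log space."""
    log_odds = log_B + np.log(prior) - np.log1p(-prior)
    return 1.0 / (1.0 + np.exp(-log_odds))

# illustrative prior: one true counterpart among N_star candidate stars
N_star = 1.0e8                    # hypothetical total star count
prior = 1.0 / N_star
for w in (10.0, np.log(N_star), 30.0):   # three example weights of evidence
    print(f"ln B = {w:6.2f}  ->  P = {posterior_from_bayes_factor(w, prior):.6f}")
```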
in contrast, the static model continues to reject the associations of all high proper - motion stars. we see that one of the stars with actually gets a high probability even in the static model, when the angular separation over only a few years between the epochs is small enough to recover the star. table [tab:n] shows the measurements as a function of the number of epochs used in the calculations. we see that the empirical prior of the improved model assigns 100% probability to all stars when considering all 4 epochs, and even the 3-epoch computations would yield close to that, with the star getting a lower 97%. the exception to this is the fastest star at in the case of the empirical prior, whose probability is essentially 0 in all panels. the reason for this is that this star is one of the highest proper - motion stars in stripe 82, and even with the generated model stars, which appear in the prior, we have very few (roughly 40) high proper - motion stars. it is worth re - iterating the reason for and the consequence of these results. associations of more than two detections benefit dramatically more from the proper - motion prior because two points can always be connected with a straight line, unlike three or more. in other words, the prior probability of two detections being on a great circle is 100%, but for three or more it is small, hence such combinations will get boosted by the alignment. having seen the convincingly large probabilities for the 3-way cases and assuming the same maximum time difference between observations, one can conclude that, for the time intervals we consider here, surveying strategies with 3 epochs are superior to those with only 2, but adding more would not noticeably improve our ability to correctly cross - identify the detections. we presented an improved model for probabilistic cross - identification of stars, which accommodates the possibility of moving objects via a proper - motion prior. using the bayesian approach of , we performed hypothesis testing with the new models on a sample of sdss dr6 stars with known proper motions and compared the results to the static case. in accord with our expectations, we found that moving stars would be missed by association algorithms that neglect to model the motion, but using an empirical prior of the proper motion would assign larger observational evidence to the match and higher probabilities. the dependence of the quality of these cross - identifications was studied as a function of separation in time (and space) as well as using multi - epoch observations. the sdss stripe 82 sample provided a good test set with 24 detections at different times with a few years in between. the tests were done assuming a maximum proper motion of 600 mas/yr. we found that, even though the 2-epoch data sets benefit significantly from the proper - motion model, the 3-epoch observations essentially recover the right associations even for fast - moving stars, and the 4-epoch cases yield 100% probabilities. we also conclude that the empirical prior surpasses the static model for the whole range of proper motions, while the uniform prior performs better only for the high proper - motion stars. since the analytically computable static case is still a good model for most celestial sources, it is best to carry out the cross - identification in multiple steps: first finding associations using the static model, and then applying the more computer - intensive proper - motion variant only to the remainder of sources.
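a minimal sketch of this multi - step strategy is given below; the scoring functions are assumed to be supplied by the user, and their names and signatures are placeholders rather than a published api.

```python
def two_pass_crossid(candidates, static_logB, moving_logB, threshold=0.0):
    """Two-pass association strategy (illustrative sketch only).

    candidates  : iterable of candidate detection tuples
    static_logB : cheap, analytic static-model score, returns ln(B)
    moving_logB : expensive proper-motion-model score, returns ln(B)
    threshold   : ln(B) above which the match hypothesis is accepted
    """
    matched, remainder = [], []
    for cand in candidates:
        if static_logB(cand) > threshold:      # fast analytic pass
            matched.append(cand)
        else:
            remainder.append(cand)
    # expensive numerical pass only on what the static model rejected
    matched += [c for c in remainder if moving_logB(c) > threshold]
    return matched
```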
while it might be tempting to simply increase the positional errors to discover the associations of moving sources, the procedure would be far from optimal .the overall dominant effect of such changes is that the bayes factor would drop slower with separation , and , since the angular distance is essentially divided by the uncertainty , a ten times larger would practically yield associations out to ten times larger distances ; most of them incidental .the improvement of our novel approach over such naive workarounds comes from using the true uncertainties and the high sensitivity of the algorithm to sources moving on a great circle as allowed by the proper motion model .the authors would like to acknowledge the use of the online tools of the besanon collaboration to obtain simulated stars for this study and thank rosemary wyse for her invaluable insights and help with stellar model of the galaxy . t.b .acknowledges support from the gordon and betty moore foundation via gbmf 554 .g.k . and i.c .acknowledge support from nkth : polanyi , kckha005 and otka - mb08a-80177 .a.c . acknowledges partial support from nsf award ast-0709394 .rapaport , m. , le campion , j .- f ., soubiran , c. , daigne , g. , pri , j .-bosq , f. , colin , j. , desbats , j .-m . , ducourant , c. , mazurier , j .-m . , montignac , g. , ralite , n. , rquime , y.,viateau , b. 2001 , a&a , 376,325
the cross - identification of sources in separate catalogs is one of the most basic tasks in observational astronomy . it is , however , surprisingly difficult and generally ill - defined . recently formulated the problem in the realm of probability theory , and laid down the statistical foundations of an extensible methodology . in this paper , we apply their bayesian approach to stars with detectable proper motion , and show how to associate their observations . we study models on a sample of stars in the sloan digital sky survey , which allow for an unknown proper motion per object , and demonstrate the improvements over the analytic static model . our models and conclusions are directly applicable to upcoming surveys such as panstarrs , the dark energy survey , sky mapper , and the lsst , whose data sets will contain hundreds of millions of stars observed multiple times over several years .
much effort has been invested in improving the stability of atomic clocks, which were first demonstrated more than 50 years ago. the stability of a clock, denoted by (where is a fractional frequency), is expressed as where is a constant of the order of unity that depends on the spectrum shape, is the quality factor, is the total number of measurements, is the cycle time and denotes the total measurement time. snr is the signal - to - noise ratio of a single measurement of an atomic population ratio. the measurement noise in the snr may comprise the technical error of detection, quantum projection, fluctuating local oscillator (lo) frequency, fluctuating atomic frequency shifts, and loss of atomic coherence. the technical noise is a combination of many noise sources related to the detection of the signal, including laser power and frequency fluctuations, photon shot noise, atom number fluctuations, and electronics noise. we note that in this paper we mainly discuss the technical noise, the quantum projection noise (qpn) and the lo noise, assuming that all other noises are sufficiently small. the clock stability expressed in eq. improves with a $1/\sqrt{\tau}$ scaling of the averaging time. when the measurement noise is dominated by technical noise or qpn, the stability line expressed by eq. is hereafter referred to as the "technical noise limit" or the "qpn limit," respectively. the figure illustrates the stability of the technical limit and the qpn limit, assuming the technical limit is larger than the qpn limit. (figure caption: stability limits given by eq. ([eq:sigma1]) and eq. ([eq:sigmaapl]), respectively; here we assume the tn limit is worse than the qpn limit. a) lo noise exceeds the technical noise: the free - running lo noise is larger than the tn limit; starting from the free - running noise, the stability decreases at a $1/\tau$ rate until it reaches the snr limit (blue dotted line); the apl follows the same slope but is not limited by the snr limit. b) technical noise exceeds the lo noise: since the dick limit is negligible in this case, the apl alone is responsible for the $1/\tau$ dependence.) from eq. , we observe that the stability can be improved by 1) increasing the q of the resonance by increasing the probing time or the carrier frequency, 2) increasing the snr, and 3) averaging over many measurements (i.e., increasing the number of measurements). past improvements have focused on strategies 1) and 2). for example, the q of the optical ion clock developed by bergquist et al. was several orders of magnitude higher than that of microwave clocks. katori et al. demonstrated an optical lattice clock that simultaneously increases the q and the snr. squeezed or entangled states were proposed to reduce the technical noise to below the qpn limit. we need to be careful that eq. ([eq:sigma1]) does not account for the effect of the dead time, known as the dick effect. the dead time is the time expended in any process other than probing the atoms with microwaves. the stability is limited by the dick effect when the frequency noise of the lo is large compared to the technical noise and the dead time is significant. in this case, reducing the dead time reduces the stability as $1/\tau$ until the snr limit line is reached (blue broken line in panel a)). this idea has been recently demonstrated. to accelerate the averaging rate of strategy 3), we have proposed an atomic phase lock (apl).
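as a toy numerical illustration of the two averaging behaviours discussed here, the sketch below contrasts the conventional $1/\sqrt{\tau}$ averaging of eq. ([eq:sigma1]) with the faster $1/\tau$ averaging that the apl targets while the atomic phase remains coherent; all parameter values and the crossover form are illustrative assumptions, not measured quantities.

```python
import numpy as np

# illustrative parameters only (not the experiment's actual values)
sigma_1 = 1.0e-12   # assumed single-shot fractional-frequency instability at tau = T_c
T_c = 1.0           # assumed cycle time in seconds
n_coh = 3           # number of cycles over which the atomic phase stays coherent

tau = np.logspace(0, 4, 9)               # averaging times from 1 s to 10^4 s

# conventional operation: phase reset every cycle -> sigma ~ tau^(-1/2)
sigma_std = sigma_1 * np.sqrt(T_c / tau)

# atomic phase lock: sigma ~ tau^(-1) while coherent (tau <= n_coh*T_c),
# then tau^(-1/2) again, starting from the lowered level
sigma_apl = np.where(
    tau <= n_coh * T_c,
    sigma_1 * (T_c / tau),
    (sigma_1 / n_coh) * np.sqrt(n_coh * T_c / tau),
)

for t, s_std, s_apl in zip(tau, sigma_std, sigma_apl):
    print(f"tau={t:8.1f} s   standard={s_std:.2e}   apl={s_apl:.2e}")
```

in the long - time limit the apl curve sits a factor $\sqrt{n_{\rm coh}}$ below the conventional one, which is the kind of improvement discussed below for three coherent cycles.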
principally, the atomic phase lock lowers the stability at the fastest rate, $1/\tau$, by genuinely monitoring the phase of the atoms. provided that the atomic phase remains coherent and is monitored as such, the stability of the atomic clock should improve as $1/\tau$. however, the stability of atomic clocks normally improves only as $1/\sqrt{\tau}$ even when employing the ramsey sequence, which measures the atomic phase. this trend occurs because the atomic phase is destroyed (and initialized) after each projection measurement cycle. if the atoms could maintain their phase coherence over many measurement cycles, the stability would reduce much more rapidly (as $1/\tau$), and the stability could be expressed as where is the frequency stability under maintenance of the atomic phase coherence. panel a) of the figure shows the typical stability of an atomic clock that is limited by lo noise. in the presence of significant dead time, the stability is limited by the dick limit. as already mentioned in the previous paragraph, if the dead time is sufficiently small, then the stability reduces as $1/\tau$ until it reaches the snr limit. this means that, when limited by lo noise (panel a)), it is not an ideal situation to demonstrate the apl, because observing the $1/\tau$ dependence could just mean that the dead time is small enough. in order to avoid this ambiguity, we have decided to stay in the technical noise limit, where the technical noise is larger than the lo noise (panel b)), for the demonstration of the apl. in this case, the observation of the $1/\tau$ dependence comes only from the effect of the apl and not from the elimination of the dick effect. we developed a method that projects only a part of the atoms, in order to maintain the coherence of the atomic phase for multiple cycles, and we call this method partial projection measurement (ppm). in principle, the stability ends up at the same value whether one performs a projection measurement part by part or all at once. in other words, projecting part of the atoms at a time maintains the phase coherence for up to cycles and reduces the stability at the fastest rate $1/\tau$ for the same cycles, but the snr is reduced by a factor in return. therefore, apl using ppm results in the same stability as the conventional method, where a projection measurement is performed once for all the atoms. in this sense, this paper demonstrates a proof - of - principle experiment of apl, but not an actual improvement of the overall clock performance. we note, however, that apl opens a way to overcome the technical noise limit when an atomic clock's snr is limited by technical noise. for example, if we trap one million atoms but we can project only 10000 atoms at one time, the stability is limited by an snr of 100, which is a factor of 10 worse than the qpn limit for one million atoms. if we introduce apl using ppm, we maintain the phase lock of the lo to the atoms for up to 100 cycles of partial projections, and the stability approaches the qpn limit for the one million atoms. the present paper experimentally demonstrates the apl. section 2 describes our experimental setup. we used a single ensemble of ions, and performed projection measurements on only a portion of the ions at a time (section 3). section 4 shows the results of three phase measurements obtained by the proposed method without resetting the atomic states. the stability is reduced as $1/\tau$ instead of $1/\sqrt{\tau}$, and the long - term stability line is lowered by a factor of $\sqrt{3}$ as expected. section 5 discusses the application of the method and suggests ideas for further improvements. this section describes our experimental setup.
shows our linear radio frequency (rf) quadrupole trap, in which four cylindrical rods (diameter = 2 mm) are placed at the corners of a square separated by 4 mm (center to center), and the dc bias plates are spaced by 30 mm. a sinusoidal voltage of 10.15 mhz with an amplitude of 330 v and a constant 50 v are applied to the rf rods and the dc bias plates, respectively. ytterbium ions of the 171 isotope ( ) are selectively trapped by a 399 nm photo - ionization laser (not shown in ) and a 370 nm cooling laser. unintentionally, ions were trapped in two locations, delineated by red ovals in . each cloud (3 mm length) contains about 2000 ions. finite element method simulations revealed that our rf rods are much more closely spaced than the distance between the two bias plates. at such a small separation, the dc potential near the trap center is shielded by the rods. consequently, a small potential barrier (whose origin is unclear) remains in the center even with a 50 v dc potential on the bias plates. one of the clouds was used for the clock measurements. once trapped, the ions are cooled to about 50 mk by doppler laser cooling. this 50 mk temperature was obtained by separately measuring the broadening of the 370 nm cooling transition. the ions were predominantly cooled by the 370 nm laser (13 ) and the cooling cycle was closed by combining the 935 nm repump laser (6 mw) with 14 ghz modulation of the 370 nm cooling laser ( ). the trap area of the vacuum chamber was covered by a single - layer magnetic shield with a shielding factor of 15, yielding an interior field of 0.04 gauss. during microwave probing, this residual field was canceled by 3 pairs of helmholtz coils. the dark states were destabilized by a 0.4 gauss field tilted by 45 degrees from the laser axis, using the helmholtz coils. the ions were initialized to the state by a cooling laser that is phase - modulated at 2.1 ghz by an electro - optical modulator. the 12 ghz microwave clock transition is the hyperfine splitting of the ground state. the ions were coupled to microwaves emitted by a microwave horn. the 12 ghz microwave synthesizer was referenced to a hydrogen maser, and the synthesizer can be switched between two phase profiles (in our case, between 0 and 90 degrees) using external logic. microwave emission was terminated by a 60 db isolation pin switch. the population ratio of the ions in the excited state ( ) was measured by the electron shelving technique, using the 370 nm laser without the 14 ghz modulation. the timings of switching the lasers, the microwave amplitude and phase, the 14 ghz modulation and the 2 ghz modulation of the cooling laser were precisely controlled by a field - programmable gate array (fpga). the fpga also counted the number of photons, and passed the data to a pc for further data processing. we now introduce the partial projection measurement (ppm). in ppm, a diagonal laser beam at 16 degrees is operated at 370 nm (see ) such that only a portion of the ions interacts with the laser. the waists of the ion cloud and of the diagonal beam are about equal (approximately 200 µm). the basic measurement sequence is as follows. 1. initialize the atomic state (cooling and pumping to ). 2.
manipulate the state with microwaves (details are shown in figures [fig:pprabi] and [fig:ppm]), 3. perform ppm, 4. repeat steps (ii)-(iii). after the first partial measurement, the projected ions become mixed with the un - projected ions during the manipulation period prior to the next measurement. we note that our ion temperature was sufficiently high that the ion cloud never crystallized. the measured population is valid provided that the ratio of un - projected to projected ions exceeds the snr of the measurement. for example, when there are 10,000 total ions and 100 ions are measured at snr = 10 via ppm, 1% of the total ions are projected at each ppm and will give wrong information about the phase in the next measurement. since these projected ions scatter through the whole ion cloud during step (ii), the ratio of projected ions increases exponentially as one repeats the ppm $n$ times. since these projected ions result in a phase - estimation error, one should limit $n$ to less than 10 measurements, so that the error due to the projection is less than the technical noise. in other words, we keep the snr constant by terminating the apl before the noise due to the projection exceeds the technical noise. the validity of ppm was tested by first measuring the rabi oscillation under ppm. this test evaluates whether ppm can correctly measure a population that is constantly changed by the microwave interaction. figure [fig:pprabi]a) shows the measurement sequence of normal rabi oscillations obtained under ppm. in a standard rabi oscillation measurement, the atomic state is reset after each measurement cycle, and the probe time is increased at each cycle. in the ppm approach, the atomic state is not reset; rather, another rotation is added in each cycle. the measured result is shown in figure [fig:pprabi]b). if ppm is valid, the red data should align with the blue sinusoidal curve. we observe that ppm is correct up to three measurements, but desynchronizes from the correct population at and beyond the 4th measurement. this decoherence is further discussed in the next section. we now test the decoherence due to ppm in phase measurement sequences. to monitor the total phase difference accumulated over multiple measurement cycles, we must modify the ramsey sequence as well as the projection measurement. our modified ramsey sequence is shown in figure [fig:ppm]a) and proceeds via the following steps: 1. initialize once at the beginning of the measurement, 2. construct a superposition state by applying a rotation (0.75 ms) with 0 degree phase, 3. wait for a free precession time, 4. transfer the phase to the population by applying a rotation with 90 degree phase, 5. perform ppm, 6. revert the population to the original phase by applying a rotation with 90 degree phase. repeat (iii)-(vi). this sequence is very similar to that proposed in our previous paper, but is slightly modified to accommodate a technical limitation. our 12 ghz signal generator can switch the phase via external logic control between only two profiles (in our case, at 0 and 90 degrees), so the $\pi/2$ rotation at the 270 degree phase was replaced with a $3\pi/2$ rotation at the 90 degree phase. the 12 ghz microwave signal generator was referenced to a hydrogen maser, and its frequency noise was much smaller than the snr of the measurements.
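the budget argument above (10,000 ions, 100 projected per ppm, snr = 10) can be checked in a few lines; treating the accumulated projected fraction as a phase - estimation error to be compared directly with the 1/snr technical noise, and the compounding form 1-(1-eps)^n, are simplifying assumptions of this sketch.

```python
# back-of-the-envelope budget for how many partial projection measurements
# (PPMs) can be chained before the already-projected ions dominate the noise
N_total = 10_000          # ions in the trap (example from the text)
N_meas = 100              # ions projected per PPM
snr = 10                  # technical signal-to-noise of one PPM
eps = N_meas / N_total    # fraction projected (and decohered) per cycle

technical_noise = 1.0 / snr
for n in range(1, 16):
    projected_fraction = 1.0 - (1.0 - eps) ** n   # grows toward 1 with n
    ok = projected_fraction < technical_noise
    print(f"n={n:2d}  projected={projected_fraction:.3f}  "
          f"{'below' if ok else 'ABOVE'} technical noise {technical_noise:.2f}")
```

with these numbers the projected fraction stays below the 1/snr level for roughly ten chained ppms, consistent with the limit quoted above.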
the phase error in a hydrogen maser at 0.1 s averaging time is below 0.02 radians. this implies that the measured zero phase shift between the lo and the atoms should always lie within the measurement noise. figure [fig:ppm]b) shows the measured phase over 20 ppm measurements. each datum is the average of 32 measurements, and the measurement noise (0.2 radians) is mainly due to scattering of the 370 nm laser. clearly, ppm measures the correct phase over three consecutive measurements, and thereafter deviates from the true phase by more than one sigma. after the 3rd measurement, the atomic phase abruptly loses coherence and the population ratio rapidly asymptotes toward 70% excitation. together with the rabi oscillations measured by ppm, we conclude that our continuous phase measurement is valid up to three measurements. we consider a simple model, in which a constant number of ions are projected and lose coherence during each measurement. prior to the -th measurement, the proportion of ions whose states are projected is given by eq. ([eq:decoherence]), where is the ratio of the number of ions that get projected in a single measurement, normalized by the total number of ions. fitting the data to eq. ([eq:decoherence]) with an additional fitting parameter (the amplitude) yields the solid curve in the figure. from this fitting, the projected ratio is estimated as 18%. to elucidate the decoherence rate, we simulated the ion motions using a molecular dynamics method. we calculated the motions of 2000 ions trapped in a potential corresponding to our trap parameters, assuming a constant temperature. we confirmed that the ions undergo brownian motion, with a mean - squared displacement along the optical axis proportional to the elapsed time, the proportionality being set by the diffusion constant of the ions. simulating this ensemble at t = 50 mk, we obtained the diffusion constant. next, we counted the number of ions passing through the region in which the diagonal laser and the ions overlap. at 50 mk, 17% of the total ions were struck by the measurement laser within 1 ms. this implies that 17% of the ions were projected and decohered during the 1 ms measurement time, consistent with the 18% estimated from the fit. the cause of the abrupt decoherence after the 3rd measurement remains unclear. this section experimentally compares the stability of the standard method (in which the phase is initialized during each cycle) with that of the apl. the apl initializes the phase after each sequence of 3 ppms, as shown in a). b) shows the standard deviation of the apl results (filled red triangles) and the overall allan deviation (filled red squares). the apl deviation and the apl - repetition deviation are calculated in different ways. for the former, the frequency error of the lo relative to the atomic frequency is estimated from the measured total phase, where n indicates the n-th apl measurement cycle (n = 1, 2, or 3) and the free precession time of the ramsey sequence enters the conversion. in b), the standard deviation of the n-th measurement scales as $1/n$ (red triangles). for the latter, apl cycles are established, and we calculate and plot the allan deviation from the final (3rd) measurement in each cycle (red squares). the triangles and squares in b) are calculated from the same data, and the slight mismatch is due to the difference between the standard and allan deviations.
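the allan deviation used for this comparison can be estimated with the standard non - overlapping estimator; the sketch below runs on synthetic white frequency noise, so all numbers are illustrative.

```python
import numpy as np

def allan_deviation(y, tau0, m_list):
    """Non-overlapping Allan deviation of fractional-frequency data y sampled
    every tau0 seconds, evaluated at averaging times m*tau0 (textbook estimator,
    shown only to illustrate how the tau^(-1/2) and tau^(-1) slopes are read off)."""
    y = np.asarray(y, dtype=float)
    taus, adevs = [], []
    for m in m_list:
        n_block = len(y) // m
        if n_block < 3:
            break
        # average y over blocks of m samples, then use adjacent-block differences
        yb = y[: n_block * m].reshape(n_block, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(yb) ** 2)
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# toy data: white frequency noise -> expected sigma_y(tau) ~ tau^(-1/2)
rng = np.random.default_rng(1)
y = 1e-12 * rng.standard_normal(4096)
taus, adevs = allan_deviation(y, tau0=1.0, m_list=[1, 2, 4, 8, 16, 32, 64])
for t, a in zip(taus, adevs):
    print(f"tau={t:5.0f} s   sigma_y={a:.2e}")
```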
for comparison, the allan deviation of the regular ramsey measurement, in which the phase is initialized during each cycle, is also plotted in b). this deviation scales as $1/\sqrt{\tau}$. based on the measured allan deviations, the stability of the apl is improved by a factor of $\sqrt{3}$ relative to the standard ramsey cycles. overall, we have demonstrated that apl improves the stability by a factor of $\sqrt{n}$, where $n$ is the number of continuous apl measurements. the number of valid ppms can be increased by 1) reducing the temperature of the ions (lowering the diffusion coefficient), 2) trapping more ions (reducing the proportion of decohered ions in a single measurement), or 3) adopting weak measurements. since a weak measurement ideally allows us to estimate the phase while preserving it over multiple measurement cycles, we expect that this last strategy can greatly increase the number of valid ppms. a promising weak measurement scheme is faraday rotation, discussed in detail in our previous paper. although this experiment is not a demonstration of an actual improvement of clock performance, as already mentioned, apl via ppm will be useful when the performance of an atomic clock is limited by technical noise. in the past, trapping more atoms than (snr)$^2$ was futile. apl via ppm opens a path to utilizing a larger number of atoms than (snr)$^2$ for better stability, thereby lowering the stability line. ppm limits the number of coherent cycles (where is the total number of atoms in the trap) because the number of measured atoms is adjusted to (snr)$^2$ and all of these atoms become decohered during a single measurement. when this maximum number of cycles is used, eq. computes the qpn limit line. therefore, when apl is based on ppm, the system cannot be improved beyond the qpn limit ( ), except perhaps by weak measurement. however, a discussion of the weak measurement limit is beyond the scope of this paper. since we performed the proof - of - principle experiment in the regime that is limited by technical noise, we did not have to feed the atomic signal back to the lo. our next step would be to perform the apl for the case where lo noise limits the stability, in order to demonstrate an actual improvement of clock performance. in that case, we would need to carefully evaluate the servo error in the feedback system to the lo frequency. recently, the use of multiple atomic traps has been proposed. this scheme shows the same scaling, and the overall stability is reduced by a factor of , where is the number of atomic traps. further stability improvements are conceivable if this scheme could be combined with the apl. this work was supported by the jst presto program and nict. we thank h. hachisu and t. ido for comments on this manuscript. l. essen and j. v. l. parry : nature * 176 * ( 1955 ) 280 . f. riehle : _frequency standards : basics and applications_ ( wiley - vch , weinheim , 2004 ) chap . s. a. diddams , th . udem , j. c. bergquist , e. a. curtis , r. e. drullinger , l. hollberg , w. m. itano , w. d. lee , c. w. oates , k. r. vogel , and d. j. wineland : science * 293 * ( 2001 ) 825 . m. takamoto , f. hong , r. higashi , and h. katori : nature * 435 * ( 2005 ) 321 . d. j. wineland , j. j. bollinger , w. m. itano , f. l. moore , and d. j. heinzen : phys . a * 46 * ( 1992 ) 6797 . j. j. bollinger , w. m. itano , d. j. wineland , and d. j. heinzen : phys . a * 54 * ( 1996 ) 4649 . w. itano , j. c. bergquist , j. j. bollinger , j. m. gilligan , d. j. heinzen , f. l. moore , m. g. raizen , and d. j. wineland : phys . a * 47 * ( 1993 ) 3554 . g. j.
dick : proc .19th precise time and time interval , 1987 , p. 133 .g. j. dick , j. d. prestage , c. a. greenhall , and l. maleki : proc .22nd precise time and time interval , 1990 , p. 487 .g. w. biedermann , k. takase , x. wu , l. deslauriers , s. roy , and m. a. kasevich : phys .* 111 * ( 2013 ) 170802 .n. shiga and m. takeuchi : new j. phys .* 14 * ( 2012 ) 023034 .n. f. ramsey : _ molecular beams _ ( clarendon press , oxford , 1956 ) chap . v. p. phoonthong , m. mizuno , k. kido , and n. shiga : in preparation for publication .s. m. olmschenk : phd .thesis , university of michigan ( 2009 ) d. j. berkeland and m. g. boshier : phys .a * 65 * ( 2002 ) 033413 .w. nagourney , j. sandberg , and h. dehmelt : phys .* 56 * ( 1986 ) 2797 .k. okada , t. takayanagi , m. wada , s. ohtani , and h. a. schuessler : phys .a * 81 * ( 2010 ) 013420 .j. borregaard and a. s. srensen : phys .( 2013 ) 090802 .t. rosenband and d. r. leibrandt : arxiv:1303.6357 .
we experimentally demonstrated that the stability of an atomic clock improves at the fastest rate, $1/\tau$ (where $\tau$ is the averaging time), when the phase of a local oscillator is genuinely compared to the continuous phase of many atoms in a single trap (atomic phase lock). for this demonstration, we developed a simple method that repeatedly monitors the atomic phase while retaining its coherence, by observing only a portion of the whole ion cloud. using this new method, we measured the continuous phase over 3 measurement cycles, and thereby improved the stability scaling from $\tau^{-1/2}$ to $\tau^{-1}$ during the 3 measurement cycles. this simple method provides a path by which atomic clocks can approach the quantum projection noise limit, even when the measurement noise is dominated by technical noise.
given a graph , a proper -coloring of is an assignment of different colors to the vertices of such that two adjacent vertices receive two different colors .the classical graph vertex coloring problem ( gcp ) is to find a proper ( or legal ) -coloring with the minimum number of colors ( i.e. , the chromatic number of ) for a general graph .the minimum sum coloring problem ( mscp ) is a variant of the gcp and aims to determine a proper -coloring while minimizing the sum of the colors assigned to the vertices .mscp was proposed by kubicka in the field of graph theory and by supowit in the field of vlsi design .mscp has applications in vlsi design , scheduling and resource allocation for instance .mscp is also related to other generalizations or variants of gcp like sum multi - coloring , sum list coloring and bandwidth coloring . like the classical vertex coloring problem, mscp is notable for its practical applicability and theoretical intractability .indeed , in the general case , the decision version of mscp is np - complete and approximating the minimum color sum within an additive constant factor is np - hard . as a result , mscp is a computationally challenging problem and any algorithm able to determine the optimal solution of the problem is expected to require an exponential complexity . due to its high computational complexity ,polynomial - time algorithms exist only for some special cases of the problem ( see section [ sec_approximation ] ) and solving the problem in the general case remains an imposing challenge . in the past several decades, much effort has been devoted to developing various approximation algorithms and practical solution algorithms .approximation algorithms aim to provide solutions of provable quality while practical solution algorithms try to find sub - optimal solutions as good as possible within a bounded and acceptable computation time . the class of heuristic and metaheuristic algorithms has been mainly developed since 2009 and has enlarged our capacity of finding improved solutions on the benchmark graphs .representative examples of the existing heuristic algorithms include greedy algorithms , tabu search , breakout local search , iterated local search , ant colony , genetic and memetic algorithms as well as heuristics based on independent set extraction . to the best of our knowledge ,there is only one review published one decade ago in 2004 that focuses on polynomial - time algorithms for specific graphs , mscp generalizations ( or variants ) and applications . for the purpose of solving mscp , the first studies essentially concerned the development of approximation algorithms and simple greedy algorithms .research on practical solution algorithms of mscp was relatively new and appeared around 2009 .nevertheless , important progresses have been made since that time .the purpose of this paper is thus to provide a comprehensive review of the most recent and representative mscp algorithms . to be informative, we identify the general framework followed by the existing heuristic and metaheuristic algorithms and their key ingredients that make them successful . by classifying the main search strategies and putting forward the critical elements of the reviewed methods, we wish to encourage future development of more powerful methods and motivate new applications . 
in the following sections, we first provide a general definition of mscp, then a brief introduction to approximation algorithms in section [sec_approximation], followed by the presentation of the studied heuristics and metaheuristics in section [sec_approach]. section [sec_upper&lower] presents lower and upper bounds. before concluding, section [sec_results] introduces the mscp benchmark instances and summarizes the computational results reported by the best performing algorithms on these instances. let $g = (v, e)$ be a simple undirected graph with vertex set $v$ and edge set $e$. a proper $k$-coloring of $g$ is a mapping $c: v \rightarrow \{1, \ldots, k\}$ such that $c(u) \neq c(v)$ for every edge $\{u, v\} \in e$. equivalently, a proper $k$-coloring can be defined as a partition of $v$ into $k$ mutually disjoint independent sets (or color classes) $v_1, \ldots, v_k$ such that no edge joins two vertices of the same class. the objective of mscp is to find a proper $k$-coloring with a minimum sum of the colors that are assigned to the vertices of $g$. the minimum sum of colors for mscp is called the _chromatic sum_ of $g$, and is denoted by $\sum(g)$. the _strength_ of a graph $g$ is the smallest number of colors over all optimal sum colorings of $g$. obviously, the chromatic number $\chi(g)$ from the classical vertex coloring problem is a lower bound of the strength, i.e., $\chi(g) \leq s(g)$. let $\mathcal{c}(g)$ be the set of all proper $k$-colorings of $g$; the minimization objective of mscp is given by eq. ([eq1mscp]): \[\sum(g) \;=\; \min_{c \in \mathcal{c}(g)} \sum_{l=1}^{k} l\,|v_l|,\] where $|v_l|$ is the cardinality of the color class $v_l$, the inner sum being the chromatic sum of the coloring $c$. figure [fig_example] shows an illustrative example for mscp. the graph has a chromatic number of 3 (left figure), but requires 4 colors to achieve the chromatic sum (right figure). indeed, with the given 4-coloring, we achieve the chromatic sum of 15, while the 3-coloring of the left figure leads to a suboptimal sum of 18 (upper bound). table [table_benchmark] gives the detailed characteristics of the benchmark graphs. columns 2-5 and 9-12 indicate the number of vertices, the number of edges, the density and the chromatic number of each graph. columns 6-7 and 13-14 show the best theoretical lower and upper bounds of the chromatic sum ( and respectively). underlined entries (in all tables) indicate that the theoretical upper bound equals the computational upper bound, while no theoretical lower bound equals the computational lower bound. note that, since the chromatic numbers of some difficult graphs are still unknown, we use the minimum $k$ for which a $k$-coloring has been reported in the literature instead of $\chi(g)$ to compute the bounds, using the equations introduced in section [subsec_theoreticalbounds]. based on the benchmark introduced in the previous section, table [table_many_algorithms] (see the appendix) summarizes the computational results of six representative and effective mscp algorithms presented in section [sec_approach]: bls, masc, mds(5)+ls, exscol, ma - mscp and hesa. columns 1-3 present the tested graph and its best known lower and upper bounds ( and respectively, in bold face when optimality is proved); the following 18 columns give the detailed computational results of the six algorithms. "-" marks for the reference algorithms mean non - available results. the results in terms of solution quality (best / average lower and upper bounds, and respectively) are directly extracted from the original papers. computing times are not listed in the table due to the differences in experimental conditions (platforms, programming languages, stop conditions ...
) .nevertheless , the second and third lines of the heading respectively indicate the main computer characteristic ( processor frequency ) and the stop condition to have an idea of the maximum amount of search used by each approach .note that there is no specific stop condition for exscol since its extraction process ends when the current graph becomes empty .furthermore , some heuristics can halt before reaching the stop criterion , when a known ( lower ) bound is reached for instance . from table[ table_many_algorithms ] , one observes that only hesa reports results for all the 94 graphs of the benchmark . besides , mds(5)+ls , exscol , ma - mscp , and hesa provide lower and upper bounds while bls and masc only give an upper bound .additionally , figure [ fig_comparisons ] provides performance information of each of the six algorithms compared to the best known upper and lower bounds .one observes that no algorithm can reach all the best known results .bls and masc attain the best upper bounds for 17 graphs out of the 27 tested graphs and for 56 graphs out of the 77 tested graphs respectively .mds(5)+ls reaches the best lower ( upper ) bound for 24 ( 26 ) instances out of 38 .exscol reaches the best lower and upper bounds for 38 ( out of 62 graphs ) and 24 ( out of 52 graphs ) respectively .ma - mscp reaches the best lower / upper bound for 51 / 53 graphs out of 81 .hesa equals the best lower ( upper ) bound for 86 ( 85 ) instances out of 94 .since the number of tested graphs differs from one algorithm to another , the performance of these algorithms can not be compared from a statistical viewpoint . however , from table [ table_many_algorithms ] and figure [ fig_comparisons ] , we can roughly conclude that bls , masc , mds(5)+ls , exscol , ma - mscp and hesa are currently the most effective algorithms for solving the mscp problem . from the theoretical and computational bounds reviewed above, we can make the following observations : * optimality is proved for 21 instances out of the 94 tested graphs since the best upper bounds are equal to the best lower bounds ( see entries in bold in table [ table_many_algorithms ] ) ; * 12 theoretical upper bounds equal the computational upper bounds while no theoretical lower bound equals the computational lower bound ( underlined in tables [ table_benchmark][table_many_algorithms ] ) ; * the theoretical upper bounds of queen are equal to the best computational lower bounds meaning optimal results ; * table [ table_many_algorithms ] shows that the best computational lower bounds of some easy graphs ( myciel , , for instance ) are not equal to the optimal upper bounds ( optimality proved with cplex ) . hence , the method of decomposing the graph introduced in section [ subsec_computationalbounds ] is not good enough in some cases and should be improved .this review is dedicated to recent approximation algorithms and practical solution algorithms designed for the minimum sum coloring problem which attracted increasing attention in recent years .mscp is a strongly constrained combinatorial optimization problem which is theoretically important and computationally difficult .in addition to its relevance as a typical model to formulate a number of practical problems , mscp can be used as a benchmark problem to test constraint satisfaction algorithms and solvers . 
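as a small self - contained illustration of the objective evaluated in these benchmarks, the sketch below checks a coloring for properness, computes its chromatic sum, and also evaluates the penalty - based objective recalled in the perspectives of the next section; the star graph used here is made up for the example and is not the graph of figure [fig_example].

```python
def is_proper(coloring, edges):
    """True if no edge joins two vertices with the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def chromatic_sum(coloring):
    """Sum of the integer colors (colors start at 1) over all vertices."""
    return sum(coloring.values())

def penalized_objective(coloring, edges, M=1000):
    """Penalty-based evaluation used by several local-search heuristics:
    chromatic sum plus M times the number of conflicting edges."""
    conflicts = sum(1 for u, v in edges if coloring[u] == coloring[v])
    return chromatic_sum(coloring) + M * conflicts

# star graph K_{1,3}: center 0 joined to leaves 1, 2, 3 (illustrative only)
edges = [(0, 1), (0, 2), (0, 3)]
c_a = {0: 1, 1: 2, 2: 2, 3: 2}   # proper 2-coloring, sum = 1 + 3*2 = 7
c_b = {0: 2, 1: 1, 2: 1, 3: 1}   # proper 2-coloring, sum = 2 + 3*1 = 5 (better)
for c in (c_a, c_b):
    print(is_proper(c, edges), chromatic_sum(c), penalized_objective(c, edges))
```

even on this tiny graph, two proper colorings with the same number of colors yield different chromatic sums, which is exactly the quantity the benchmarked algorithms minimize.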
based on this review, we discuss some perspective research directions. * _evaluation function and search space:_ as introduced in section [sec_definitions], the aim of mscp is twofold: (1) find a _proper_ $k$-coloring of a graph and (2) ensure that the sum of the colors assigned to the vertices is _minimized_. an evaluation function combining these two objectives has been proposed in : \[f(c) \;=\; \sum_{v \in v} c(v) \;+\; m \cdot |e_c|,\] where $e_c$ is the set of conflicting edges in $c$ and $m$ is a sufficiently large natural number. since the evaluation function is used to guide the heuristic search process, it would be interesting to design other effective evaluation functions based on a better recombination of the two parts of $f$. + another possibility could be to explore only the feasible graph coloring search space, like in the competitive masc and ma - mscp approaches, using more effective (multi-)neighborhood structures. + besides, the combination of the above two ingredients in a proper way may lead to improved mscp algorithms. * _maximum independent sets extraction:_ as shown in section [subsec_greedyalgorithms], exscol is a greedy heuristic based on independent set extraction that is quite effective for large graphs. its major deficiency is that it does not include a procedure to reconsider "bad" independent sets that have been extracted. hence, one possibility is to devise a backtracking procedure to be triggered when a "bad" independent set has been identified, as proposed for the graph coloring problem. * _exact algorithms:_ there is no exact algorithm especially designed for mscp, except the general approach which applies cplex to solve the integer linear programming formulation of mscp. however, as shown in , this approach is only applicable to easy dimacs instances. on the other hand, some exact algorithms for the classical vertex coloring problem successfully solved a subset of the hard dimacs graphs. hence, it would be important to fill this gap by designing exact algorithms for mscp. to conclude, the minimum sum coloring problem, like the classical coloring problem, is a generic and useful model. advances in solution methods (both exact and heuristic methods) for these coloring problems will help find satisfying solutions to many practical problems. given the increasing interest in the sum coloring problem and related coloring problems, it is reasonable to believe that research in these domains will become even more intense and fruitful in the forthcoming years. we are grateful to the anonymous referees for valuable suggestions and comments which have helped us to improve the paper. this work was partially supported by the ligero (2009 - 2014, pays de la loire region) and pgmo (2014 - 2016, jacques hadamard mathematical foundation) projects and the national natural science foundation program of china (grant no. 61472147). bar - noy , a. , bellare , m. , halldrsson , m.m . , shachnai , h. , tamir , t. ( 1998 ) . on chromatic sums and distributed resource allocation . information and computation 140 , 183 - 202 . bar - noy , a. , halldrsson , m.m . , kortsarz , g. , salman , r. , shachnai , h. ( 1999 ) . sum multi - coloring of graphs . in : j. nesetril ( ed . ) , 7th annual european symposium on algorithms , vol . 1643 of lecture notes in computer science , springer , berlin / heidelberg , germany , pp . 390 - 401 . bar - noy , a. , kortsarz , g. ( 1998 ) . minimum color sum of bipartite graphs . journal of algorithms 28(2 ) : 339 - 365 . benlic , u.
, hao , j .- k .a study of breakout local search for the minimum sum coloring problem . in : l. bui ,y. ong , n. hoai , h. ishibuchi , p. suganthan ( eds . ) , simulated evolution and learning , vol .7673 of lecture notes in computer science , springer , berlin / heidelberg , germany , pp .128137 .berliner , a. , bostelmann , u. , brualdi , r.a ., deaett , l. ( 2006 ) .sum list coloring graphs .graphs and combinatorics 22(2 ) : 173183 .bonomo , f. , durn , g. , napoli , a. , valencia - pabon , m. ( 2015 ) .a one - to - one correspondence between potential solutions of the cluster deletion problem and the minimum sum coloring problem , and its application to -sparse graphs .information processing letters 115(68 ) : 600603 .bonomo , f. , valencia - pabon , m. ( 2014 ) . on the minimum sum coloring of -sparse graphs .graphs and combinatorics 30(2 ) : 303314 .bouziri , h. , jouini , m. ( 2010 ) . a tabu search approach for the sum coloring problem .electronic notes in discrete mathematics 36 : 915922 .borodin , a. , ivan , i. , ye , y. , zimny , b. 2012 . on sum coloring and sum multi - coloring for restricted families of graphs .theoretical computer science 418 : 113 .brlaz , d. ( 1979 ) .new methods to color the vertices of a graph .communications of the acm 22(4 ) : 251256 .douiri , s. , elbernoussi , s. ( 2011 ) .new algorithm for the sum coloring problem .international journal of contemporary mathematical sciences 6 : 453463 .douiri , s. , elbernoussi , s. ( 2012 ) .a new ant colony optimization algorithm for the lower bound of sum coloring problem .journal of mathematical modelling and algorithms 11(2 ) : 181192 .galinier , p. , hao , j .- k .hybrid evolutionary algorithms for graph coloring .journal of combinatorial optimization 3(4 ) : 379397 .garey , m.r . ,johnson , d.s .computers and intractability . a guide to the theory of np - completeness , ed .freeman and company , new york .gendreau , m. , potvin , j.y .handbook of metaheuristics .international series in operations research & management science , springer , new york .hajiabolhassan , h. , mehrabadi , m.l . ,tusserkani , r. ( 2000 ) .minimal coloring and strength of graphs .discrete mathematics 215(13 ) : 265270 .halldrsson , m.m . ,kortsarz , g. , shachnai , h. ( 2003 ) .sum coloring interval graphs and k - claw free graphs with applications for scheduling dependent jobs .algorithmica 37 , 187209 .hao , j .- k . ( 2012 ) .memetic algorithms in discrete optimization . in f.neri , c. cotta , p. moscato ( eds . )handbook of memetic algorithms .studies in computational intelligence 379 , chapter 6 , pages 7394 , springer .helmar , a. , chiarandini , m. ( 2011 ) . a local search heuristic for chromatic sum . in l.di gaspero , a. schaerf , t. sttzle ( eds . ) .proceedings of the 9th metaheuristics international conference , pp .161170 .hertz , a. , de werra , d. ( 1987 ) .using tabu search techniques for graph coloring . computing 39 , 345351 .jansen , k. ( 2000 ) .approximation results for the optimum cost chromatic partition problem .journal of algorithms 34(1 ) : 5489 .jiang , t. , west , d. ( 1999 ) .coloring of trees with minimum sum of colors .journal of graph theory 32(4 ) : 354358 .jin , y. , hao , j .- k .hybrid evolutionary search for the minimum sum coloring problem of graphs .information sciences 352-353 , 15-34 jin , y. , hao , j .- k . , hamiez , j.p .a memetic algorithm for the minimum sum coloring problem .computers & operations research 43 : 318327 .johnson , d.s . ,mehrotra , a. , trick m.a .. 
special issue on computational methods for graph coloring and its generalizations .discrete applied mathematics , 156(2 ) .kokosiski , z. , kwarciany , k. ( 2007 ) . on sum coloring of graphs with parallel genetic algorithms . in b. beliczynski ,a. dzielinski , m. iwanowski , b. ribeiro ( eds . ) , adaptive and natural computing algorithms , vol .4431 of lecture notes in computer science , springer , berlin / heidelberg , germany , pp .211219 .kosowski , a. ( 2009 ) . a note on the strength and minimum color sum of bipartite graphs . discrete applied mathematics 157(11 ) : 2552 - 2554 .kroon , l.g . ,sen , a. , deng , h. , roy , a. ( 1996 ) .the optimum cost chromatic partition problem for trees and interval graphs .international workshop on graph theoretical concepts in computer science , lecture notes in computer science , 1197 : 279292 .kubicka , e. ( 1989 ) . the chromatic sum of a graphthesis , western michigan university .kubicka , e. ( 2004 ) .the chromatic sum of a graph : history and recent developments . international journal of mathematics and mathematical sciences 30 : 15631573 .kubicka , e. ( 2005 ) .polynomial algorithm for finding chromatic sum for unicyclic and outerplanar graphs .ars combinatoria 76 : 193-201 .kubicka , e. , kubicki , g. , kountanis , d. ( 1991 ) .approximation algorithms for the chromatic sum .first great lakes computer science conference on computing in the 90 s , lecture notes in computer science , 507 : 1521 .kubicka , e. , schwenk , a.j .an introduction to chromatic sums .proceedings of the 17th conference on acm annual computer science conference , csc 89 , pages 3945 , new york , ny , usa . acm .leighton , f.t .a graph coloring algorithm for large scheduling problems .journal of research of the national bureau of standards 84(6 ) : 489506 .li , y. , lucet , c. , moukrim , a. , sghiouer , k. ( 2009 ) .greedy algorithms for the minimum sum coloring problem .logistique et transports conference , https://hal.archives-ouvertes.fr/hal-00451266/document malafiejski , m. ( 2004 ) .sum coloring of graphs , in : m. kubale ( ed . ) , graph colorings , vol .352 of contemporary mathematics , american mathematical society , new providence ( rhode island ) usa , pp .malafiejski , m. , giaro , k. , janczewski , r. , kubale , m. ( 2004 ) . sum coloring of bipartite graphs with bounded degree .algorithmica 40(4 ) : 235244 .moukrim , a. , sghiouer , k. , lucet , c. , li , y. ( 2010 ) .lower bounds for the minimal sum coloring problem .electronic notes in discrete mathematics 36 : 663670 .moukrim , a. , sghiouer , k. , lucet , c. , li . , y. ( 2013 ) .upper and lower bounds for the minimum sum coloring problem , submitted for publication .nicoloso , s. , sarrafzadeh , m. , song , x. ( 1999 ) . on the sum coloring problem on interval graphs .algorithmica 23 , 109-126 .salavatipour , m.r .( 2003 ) . on sum coloring of graphs .discrete applied mathematics 127(3 ) : 477488 .sen , a. , deng , h. , guha , s. ( 1992 ) . on a graph partition problem with applicationto vlsi layout .information processing letters 43(2 ) : 8794 .supowit , k.j .( 1987 ) . finding a maximum planar subset of a set of nets in a channel .ieee trans . comput . aided design cad 6(1 ) : 9394 .thomassen , c. , erds , p. , alavi , y. , malde , p. , schwenk , a. ( 1989 ) .tight bounds on the chromatic sum of a connected graph .journal of graph theory , 13 : 353357 .wang , y. , hao , j .- k . , glover , f. , l , z. 
(2013). solving the minimum sum coloring problem via binary quadratic programming. corr abs/1304.5876.
wu, q., hao, j.-k. (2012). an effective heuristic algorithm for sum coloring of graphs. computers & operations research 39(7): 1593-1600.
wu, q., hao, j.-k. (2012). improving the extraction and expansion method for large graph coloring. discrete applied mathematics 160(16-17): 2397-2407.
wu, q., hao, j.-k. (2013). improved lower bounds for sum coloring via clique decomposition. corr abs/1303.6761.

for the purpose of completeness, this appendix, which reproduces and extends previously reported results, shows a performance summary of the six main heuristic algorithms for the set of 94 dimacs benchmark graphs in terms of the lower and upper bounds of the mscp problem.

[table: for each of the 94 dimacs instances (the myciel, anna/david/huck/jean/homer, queen, school, miles, fpsol2.i, mug, 2-insert/3-insert, inithx.i, mulsol.i, zeroin.i, wap, qg.order, dsjc, dsjr, flat, le450, latin_sqr_10, c2000.5/c4000.5 and games120 families), the table lists the best known lower and upper bounds on the chromatic sum, followed by the best and average lower and upper bounds obtained by each of the six algorithms; a dash marks results not reported for an instance, and asterisks mark values proven optimal.]
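the upper bounds collected in this appendix are simply the smallest sums of colors found by each heuristic. as a minimal illustration of the quantity being bounded, the sketch below (our own helper; the graph encoding and all names are illustrative and not taken from any of the reviewed codes) checks that a coloring is proper and evaluates its chromatic sum.

```python
def chromatic_sum(edges, coloring):
    """Return the sum of colors of a proper coloring, or None if improper.

    edges    : iterable of (u, v) vertex pairs
    coloring : dict mapping each vertex to a positive integer color
    """
    for u, v in edges:
        if coloring[u] == coloring[v]:
            return None          # adjacent vertices share a color: not a proper coloring
    return sum(coloring.values())

# toy example: a path on three vertices colored 1-2-1 certifies an upper bound of 4
print(chromatic_sum([(0, 1), (1, 2)], {0: 1, 1: 2, 2: 1}))
```

any proper coloring found by a heuristic yields such an upper bound, while the lower-bound columns come from relaxations such as the clique decompositions cited above.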
the minimum sum coloring problem (mscp) is a variant of the well-known vertex coloring problem and has a number of ai-related applications. due to its theoretical and practical relevance, mscp attracts increasing attention. the only existing review of the problem dates back to 2004 and mainly covers the history of mscp and theoretical developments on specific graphs. in recent years, the field has witnessed significant progress on approximation algorithms and practical solution algorithms. the purpose of this review is to provide a comprehensive survey of the most recent and representative mscp algorithms. to be informative, we identify the general framework followed by practical solution algorithms and the key ingredients that make them successful. by classifying the main search strategies and putting forward the critical elements of the reviewed methods, we wish to encourage future development of more powerful methods and motivate new applications. keywords: sum coloring, approximation algorithms, heuristics and metaheuristics, local search, evolutionary algorithms.
in spite of many efforts , only a limited understanding has been achieved regarding the emergence of collective order in non - equilibrium systems . while these systems often present features analogous to the those found in equilibrium , such as phase - transitions and long - range correlations , it is not clear how to use the powerful tools available in equilibrium statistical mechanics to analyze their properties . to overcome this problem, it may be useful to consider cases where simple qualitative characteristics ( e.g. the existence and order of a phase transition ) are common to different non - equilibrium systems with similar properties .a class of non - equilibrium systems that has sparked increasing interest in recent years is given by models of groups of swarming agents .these are used to describe the collective behavior of self - propelled agents such as schools of fish , flocks of birds , or herds of quadrupeds .even the simplest of these models displays large - scale organized structures , in which agents separated by distances much larger than their interaction ranges can coordinate and swarm in the same direction .if noise is added to the system , this ordered state is destroyed as the noise level increases .when the noise reaches a critical value , the system undergoes a phase transition to a disordered state where agents move in random directions .this phase transition has been quite thoroughly analyzed through numerical simulations .however , the lack of a systematic theoretical approach to non - equilibrium systems has hindered a proper characterization of the order - disorder phase transition , and there are still doubts about its basic features even in this simplest of cases .in this paper we are particularly interested in how this phase transition might be affected by the way in which the noise is introduced in the system .two different types of noise , which we will call _ extrinsic _ and _ intrinsic _ , have recently been considered in models of swarming . in these models , at every time step each particle receives a signal from its neighbors that tells the particle in which direction to move next .the extrinsic noise consists in that the signal received by the particle is blurred ( because , say , the environment is not completely transparent and the particle can not see its neighbors very well ) . as a consequence ,the particle may move in a different direction to the one dictated by the neighbors .in contrast , in the intrinsic noise case each particle receives the signal sent by the neighbors perfectly , but then it may `` decide '' to do something else and move in a different direction .thus , the extrinsic noise can be thought of as produced by a blurry environment , whereas the intrinsic noise comes from the `` free will '' of the particles , so to speak ; namely , from the uncertainty in the particle s decision mechanism . 
in either case , of course , the net result is that , at every time step , the particle may move in a direction that departs from the one dictated by the neighbors .it has been pointed out that these two distinct types of noise , extrinsic and intrinsic , can produce very different order - disorder phase transitions .this can been shown analytically using a network approach in which the elements , instead of interacting with the neighbors in a physical space , interact with any element that is linked to them through a network connection .this kind of description has been used to model a large range of dynamics , such as the traffic between internet websites or servers , the evolution of an epidemic outbreak , the mechanisms triggered by gene expressions in the cell , or the activity of the brain . in the context of swarming systems , the network approach is equivalent to a mean - field theory in which correlations between the particles are not taken into account . however , this approach has the virtue that it allows us to separate clearly the dynamical interaction rule that determines the dynamical state of the particles , from the topology of the underlying network that develops in time and space and dictates who interacts with who .therefore , under the network approach it is possible to focus on the effects that the two different types of noise have on the dynamics of the system .even when the network approach leaves aside some important aspects of the dynamics of swarming systems ( such as correlations in space and time ) , some appealing analogies can be established between the swarming and network systems .indeed , in the simplest swarming models , the dynamics is defined by giving to each agent a steering rule that uses the velocities of all agents in its vicinity as an input to compute its own velocity for the next time - step .this algorithm can be associated to a dynamics on a switching network that links at every time - step all agents that are within the interaction range of each other . in this context , the network is simply a representation of the spatial dynamics of the system .however , it has been shown that this analogy can be pushed further successfully and that a static network with long - range connections can capture some of the main qualitative behaviors of simple swarming models . in this paper, we compare the properties of the phase transitions and dynamical mappings of two kinds of network models to further explore the analogies described above .we consider models that incorporate three of the main aspects of the interaction between the particles in swarms : an average input signal from the neighbors , noise , and , in some sense , extremely long - range interactions . in the first kind of model the elements of the network can acquire only two states , + 1 and -1 ; whereas in the second , the elements are represented by 2d vectors whose angles take any value between 0 and .we find that swarming systems and their network counterparts indeed present qualitatively similar behaviors depending on whether the noise is intrinsic or extrinsic .we also determine numerically that the same qualitative features arise when the particles are placed on a small - world network , and we extend our results to the case in which the network models are subject to both types of noise .the paper is organized as follows . in sec .[ sec : vicsek ] we present the model introduced by vicsek and his group to describe the emergence of order in swarming systems . 
in particular , we focus our attention on how the phase transition seems to change when the noise changes from intrinsic to extrinsic . in section [ sec : voter ] we present a majority voter model on a network , which is reminiscent of the ising model with discrete internal degrees of freedom .this model is simple enough as to be treated analytically , at least for the case of homogeneous random network topologies for which we show analytically that the two types of noise indeed produce two different types of phase transition . in sec .[ sec : vectorial ] we introduce another network model in which the internal degrees of freedom are continuous ( 2d vectors ) .this model can be treated analytically in the limit of infinite network connectivity . however , these results and extensive numerical simulations clearly indicate that the two types of noise again produce two different phase transitions , which are analogous to the ones observed in the majority voter model and in the self - propelled model . in sec . [sec : mean - field ] we discuss the mean - field assumptions conveyed in the two network models and how they relate to the self - propelled model .we also show that the nature of the phase transition produced by each type of noise does not change when the small - world topology is implemented , which produces strong spatial correlations between the network elements .finally , in section [ sec : conclusions ] we summarize our results .arguably , the simplest model to describe the collective motion of a group of organisms was proposed by vicsek and his collaborators . in this model , particles move within a 2d box of sides with periodic boundary conditions .the particles are characterized by their positions , , and their velocities , ( represented here as complex numbers ) .all the particles move with the same speed . however , the direction of motion of each particle changes in time according to a rule that captures in a qualitative way the interactions between organisms in a flock .the basic idea is that each particle moves in the average direction of motion of the particles surrounding it , _ plus some noise_. two interaction rules have been considered in the literature , which differ in the way the noise is introduced into the system . to state these rules mathematically , we need some definitions .let be the circular vicinity of radius centered at , and be the number of particles whose positions are within at time .we will denote as the average velocity of the particles which at time are within the vicinity , namely for reasons that will be clear later , we will call _ the input signal _ received by the -th particle . with the above definitions , the interaction rule originally proposed by vicsek __ can be written as [ eq : spmin ] + \eta\xi_n(t ) , \label{eq : vicsek}\\ \vec{v}_n(t+\delta t ) & = & ve^{i\theta_n(t+\delta t)},\\ \vec{x}_n(t+\delta t ) & = & \vec{x}_n(t ) + \vec{v}_n(t+\delta t)\delta t,\label{eq : kinetic1}\end{aligned}\ ] ] where is a random variable uniformly distributed in the interval ] .the `` angle '' function is defined in such a way that if , then angle = \theta ] if , and sign = 1 ] that represents the probability for each individual to go against the majoritarian opinion .the above interaction rule can also be written in a simpler form as + \frac{\xi_n(t)}{1-\eta}\right ] , \label{eq : vicsekvoter2}\ ] ] where is a random variable uniformly distributed in the interval ] . 
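as a concrete companion to the self-propelled rule of eqs. (spmin) above, the following numpy sketch advances the intrinsic-noise model by one time step. it is only an illustration: the box size, speed, interaction radius and the convention that the noise term is uniform on [-pi, pi] and scaled by eta are our own choices, not values fixed by the text, and all function names are ours.

```python
import numpy as np

def spm_intrinsic_step(x, theta, v, r0, eta, L, rng):
    """One update of the self-propelled model with intrinsic (angular) noise."""
    # displacement vectors with periodic boundary conditions
    d = x[:, None, :] - x[None, :, :]
    d -= L * np.round(d / L)
    near = (d ** 2).sum(-1) <= r0 ** 2            # vicinity of radius r0; includes the particle itself

    # input signal: average velocity of the particles in the vicinity
    vx = (near * np.cos(theta)[None, :]).sum(1)
    vy = (near * np.sin(theta)[None, :]).sum(1)

    # new heading = angle of the input signal + eta * xi, with xi uniform on [-pi, pi]
    xi = rng.uniform(-np.pi, np.pi, size=theta.shape)
    theta_new = np.arctan2(vy, vx) + eta * xi

    # kinematic update with dt = 1
    step = v * np.column_stack((np.cos(theta_new), np.sin(theta_new)))
    return (x + step) % L, theta_new

rng = np.random.default_rng(1)
N, L = 300, 10.0
x = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)
for _ in range(200):
    x, theta = spm_intrinsic_step(x, theta, v=0.05, r0=1.0, eta=0.2, L=L, rng=rng)
print("order parameter:", abs(np.exp(1j * theta).mean()))
```

lowering eta in this sketch should drive the printed order parameter toward 1, while large eta leaves it close to the value expected for randomly oriented particles.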
) for both types of noise , intrinsic and extrinsic , the majority voter model exhibits a phase transition from ordered to disordered states .however , the nature of this phase transition ( i.e. whether continuous or discontinuous ) depends on the type of noise . to see that this is indeed the case ,we define the order parameter for the majority voter model as in the limit , the order parameter reaches a stationary value that depends on the noise intensity .[ fig : phasevoterk3](a ) shows as a function of for the intrinsic noise case [ eq .] , in a system with and .it is apparent that in this case the phase transition is continuous .this result is consistent with the behavior of the spmin reported in fig .[ fig : phasevicsek](a ) .contrary to the above , the phase transition for the majority voter model with extrinsic noise [ eq .] is discontinuous , as is shown in fig .[ fig : phasevoterk3](b ) , which is also consistent with the behavior observed in fig .[ fig : phasevicsek](b ) for the spmen . thus , changing the way in which the noise is introduced in the voter model also changes drastically the nature of the phase transitionthe majority voter model is simple enough to be treated analytically .we can even generalize the model to incorporate the two types of noise simultaneously . in this generalization ,the value of each element is updated according to the dynamical rule + \frac{\zeta_n(t)}{1-\eta_2}\right ] , \label{eq : votergeneral}\ ] ] where and are independent random variables uniformly distributed in the interval ] .thus , if and , only the intrinsic noise is present , whereas if and only the extrinsic noise is present .intermediate cases are obtained if both and are different from zero . in what followswe consider separately the case in which is finite , and the case in which is infinite . in appendixa we present a mean - field calculation showing that , when the network connectivity is finite , the order parameter satisfies the dynamical mapping [ eqs : mappingvoter ] where and the coefficients are given by ^{k - m } \left[\sin\lambda\right]^{m } \frac{\sin(4k\eta_1\lambda)}{\lambda^2}d\lambda.\ \ \\label{eq : betas}\end{aligned}\ ] ] in the calculations that leads to the set of equations one assumes that the network elements are statistically independent and equivalent ( see appendix a ) .these assumptions hold as long as the inputs of each element are chosen randomly from anywhere in the system , namely , for the homogeneous random topology . for other topologies that introduce correlations between the network elements , such as the small - world or the scale - free topologies ,the mean - field assumptions do not necessarily apply .however , when they do apply , the order parameter given in eq . becomes the sum of independent and equally distributed random variables .therefore , the determination of becomes analogous to determining the average position of a 1d biased random walk , which can be solved exactly .the stable fixed points of eq .give the stationary values of the order parameter .it is clear from eqs . that is always a fixed pointhowever , its stability depends on the values of and . additionally , from eq .it follows that for even values of ( because in such a case the integrand in that equation is an odd function ) .therefore , the polynomial in eq .contains only odd powers of and thus , for each fixed point , the opposite value is also a fixed point . to illustrate the formalism ,we present here a detailed analysis of the simple case . 
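before turning to the detailed k = 3 analysis, the continuous transition of fig. [fig:phasevoterk3](a) can be reproduced with a direct monte carlo experiment. the sketch below implements one natural reading of the intrinsic-noise rule described above: every element adopts the sign of the sum of its k randomly chosen inputs and then goes against that majority with probability eta. the system size, number of sweeps and eta grid are illustrative choices.

```python
import numpy as np

def voter_order_parameter(N, k, eta, steps, rng):
    """Stationary order parameter of the majority voter model with intrinsic noise."""
    inputs = rng.integers(0, N, size=(N, k))       # homogeneous random topology: k inputs per node
    sigma = rng.choice([-1, 1], size=N)
    for _ in range(steps):
        majority = np.sign(sigma[inputs].sum(axis=1))   # k odd, so the sum is never zero
        flip = rng.random(N) < eta                      # go against the majority with probability eta
        sigma = np.where(flip, -majority, majority)
    return abs(sigma.mean())

rng = np.random.default_rng(2)
for eta in (0.1, 0.2, 0.3, 0.4):
    psi = np.mean([voter_order_parameter(N=5000, k=3, eta=eta, steps=300, rng=rng)
                   for _ in range(3)])
    print(f"eta = {eta:.1f}  psi = {psi:.3f}")
```

for k = 3 the measured order parameter should vanish continuously as eta grows, in line with the mean-field analysis that follows.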
however , the results are similar for any other finite value of . for , the integrals in eq . can be easily computed ( we used mathematica ) and eq . becomes [ eqs : mk3 ] where to determine the stability of the fixed points of the mapping we have to analyze the value of the derivative . if at the fixed point , then that fixed point is stable .otherwise , it is unstable .we further divide our presentation in three cases .let us first show that the phase transition is discontinuous for the case in which , namely , when there is no intrinsic noise and only the extrinsic noise is present . under these circumstances , the fixed - point equation becomes using eqs . and , it is easy to see that and are solutions of the fixed - point equation provided that ( in addition to the trivial solution which is always a fixed point ) . from eqs . , and we obtain that it follows from the above expression that in the region , which shows that the fixed points and are stable in this region .for the fixed points disappear and the only fixed point that remains is . let us compute now the stability of the fixed point . from eqs .and we get from which it follows that for , whereas for .the stability analysis presented above reveals that , when , the stable fixed points discontinuously transit from to as crosses the critical value from below .therefore , the phase transition in this case is discontinuous , as is shown in fig .[ fig : phasevoterk3](b ) .we consider now the case in which , that is , when only intrinsic noise is present . taking the limit in eqs .and one gets and .therefore , in this case eq . becomes let us start by analyzing the stability of the trivial fixed point . from the above equation we get which it follows that only for .therefore , the disordered state characterized by the fixed point is stable only for .as decreases below the critical value , the disordered phase becomes unstable and two stable non - zero fixed points appear .assuming , the fixed point equation can be solved for obtaining a stability analysis reveals that the above fixed points are stable for ( in this region ) and unstable for ( because in this other region ) . summarizing , the stable fixed points for the case are where . this resultis plotted in fig .[ fig : phasevoterk3](a ) ( solid line ) , from which it is apparent that the phase transition in the majority voter model with only intrinsic noise is indeed continuous .additionally , for values of below , but close to , the critical value at which the phase transition occurs , the order parameter behaves as , which shows that this phase transition belongs to the mean - field universality class .when both types of noise , intrinsic and extrinsic , are present in the system , the phase transition is always continuous for any finite value of . to illustrate this we present in fig .[ fig : mvoterk3](a ) the graph of for and different values of .note that is a monotonically increasing convex function , and therefore the nonzero stable fixed point appears continuously as decreases . the same happens if we now fix the value of and vary the value of , as it is shown in fig .[ fig : mvoterk3](b ) .this behavior is typical of a second order phase transition .assuming and using eq ., the fixed point equation can be solved for obtaining for this equation to have real solutions the quantity inside the curly brackets must be positive . from eq .it follows that for any positive value of .therefore , eq . 
has real solutions only if the values of and for which the equality holds in the above expression determine the critical line on the - plane at which the phase transition occurs .[ fig : pvoterkfinite ] shows surface plots for the ( positive ) value of the stable fixed point as a function of and for , , and .interestingly , for any finite value of the phase transition is _ always _ continuous except for the special case .therefore , for any finite , even a small amount of intrinsic noise suffices to make the phase transition continuous . in appendixa we show that for the temporal evolution of the order parameter is still given by the dynamical mapping eq . , where now is fig .[ fig : mvoterkinf](a ) shows the behavior of for , and different values of , and fig .[ fig : mvoterkinf](b ) shows the same kind of plots but now keeping and varying the value of .it can be seen from fig .[ fig : mvoterkinf](a ) that the phase transition is discontinuous . indeed ,as decreases below the critical value , the non - zero stable fixed point appears discontinuously ( see the point indicated with an arrow in the figure ) .an analogous behavior occurs in fig .[ fig : mvoterkinf](b ) when reaches the value .thus , in the limit the phase transition is _ always _ discontinuous ( see fig . [fig : pvoterkfinite ] ) . in this sense ,the discontinuity in the phase transition observed when only extrinsic noise is used can be considered as a singular limit , either or , of a phase transition that is otherwise continuous .the second network model that we analyze , which we will call the _ vectorial network model _ ( vnm ) , is much closer to the self - propelled model than the voter model presented in the previous section . as we will see later, the vnm corresponds to a mean - field theory of the self - propelled model .it consists of a network with nodes ( or elements ) which , as in the self - propelled model , are the two dimensional vectors ( represented as complex numbers ) .all the vectors have the same magnitude but their orientations in the plane can change .each vector is connected to a fixed set of other vectors , , from which will receive an input signal .we will call this set _ the inputs _ of , and consider again the homogeneous random topology in which all the elements have exactly inputs chosen randomly from anywhere in the system .the input signal received by from its inputs is defined as for the interaction between the network elements we consider from the beginning a dynamic rule that already incorporates both types of noise , intrinsic and extrinsic : +\eta_2\zeta_n(t ) , \label{eq : voterrule}\ ] ] where and are independent random variables uniformly distributed in the interval ] appearing in the second integral of eq . 
can be replaced by a dirac delta function radially centered at .this leads to from this equation it follows that the order parameter , which is the first radial moment of , obeys the dynamical mapping ( see details in appendix b ) the integral on the right - hand side of the above equation is an instance of the weber - schafheitlin integrals .after the evaluation of this integral , the dynamical mapping for the order parameter can be written as [ eqs : mapveckinfty ] where the mapping is ^ 2\right ) & \mbox { if } & \psi< \eta_1 \\ & & \\ \frac{\sin(\pi\eta_2)}{\pi\eta_2}{}_2f_1\left(\frac{1}{2},-\frac{1}{2};1,\left[\frac{\eta_1}{\psi}\right]^2\right ) & \mbox { if } & \psi > \eta_1 \end{array}\right .\label{eq : mvectorial2}\ ] ] and are hypergeometric functions .the fixed points of give the stationary value of the order parameter . in ref . we have shown that for , the non - trivial fixed point of this mapping appears discontinuously as crosses the critical value from above . on the other hand , for is a global factor which does not change the discontinuous appearance of the non - trivial fixed point .therefore , for any values of and , the phase transition is discontinuous .[ fig : psiveckinfty ] shows surface plots of the stationary value as a function of and for two different cases : ( i ) when the dynamics of the vnm start out from disordered initial conditions [ in eq . ] , and( ii ) when the dynamics start out from ordered initial conditions [ in eq . ] .let us denote as and the stationary values of the order parameter obtained in each of the two cases mentioned above , respectively .it is apparent from fig .[ fig : psiveckinfty ] that and are equal in a large region of the - parameter space . however , there is also a region in which and are different .this latter region , where the system shows hysteresis , is shown in fig .[ fig : diffpsiveckinfty ] , in which the difference is plotted as a function of and .the region for which is a region of metastability where two stable fixed points exist , the trivial one and the non - zero fixed point .it is clear from these results that the vnm with exhibits a discontinuous phase transition for any non - zero value of the extrinsic noise . on the other hand , for , the amount of order in the system decreases as increases .however , there is no phase transition in this case since only when reaches its maximum value .( in ferromagnetic systems , would correspond to infinite temperature . ) in other words , for infinite connectivity and zero extrinsic noise , the order in the system can never be destroyed by the intrinsic noise , unless it reaches its maximum value . however , in the presence of both types of noise , extrinsic and intrinsic , the phase transition is always discontinuous .as it was mentioned before , we do not have an analytic solution of eq . 
for finite values of .nonetheless , numerical simulations show a phase transition that is continuous in one region of the - parameter space , and discontinuous in another region .this is qualitatively different from the phase transition observed in the majority voter model , which was always continuous for any finite value of ( except for ) .[ fig : psiveckfinite ] shows as a function of and for different values of .the results reported in this figure were obtained through numerical simulations of the vnm for systems with .the figures on the left correspond to disordered initial conditions ( ) , whereas those on the right correspond to ordered initial conditions ( ) .the difference as a function of and is plotted on fig .[ fig : diffpsiveckfinite ] .it is apparent from these figures that , except for the case , there is a region of hysteresis where .this region grows as increases , but it does not seem to cross the square \times[0,1] ] , from which it follows that substituting into eq .the results given in eqs .and we obtain ^k \frac{\sin(4k\eta_1\lambda)}{4k\eta_1}\nonumber\\ & = & \sum_{m=0}^k ( -i)^m\binom{k}{m } [ \cos\lambda]^{k - m}[\psi(t)\sin\lambda]^{m}\nonumber \\ & & \times\frac{\sin(4k\eta_1\lambda)}{4k\eta_1\lambda}. \label{eq : psfourierbinomvoter}\end{aligned}\ ] ] since is the probability that , then therefore , taking the inverse fourier transform of eq . and integrating the result from 0 to we obtain ^m , \label{eq : p+1}\ ] ] where the coefficients are given by ^{k - m}[\sin\lambda]^{m}\nonumber\\ & & \times\frac{\sin(4k\eta_1\lambda)}{\lambda}e^{i\lambda x } d\lambda dx .\label{eq : apbeta1}\end{aligned}\ ] ] although it is not obvious from the above expression , it happens that . to show that this is indeed the case ,let us define the function as ^k\frac{\sin(4k\eta_1\lambda)}{4k\eta_1\lambda}.\ ] ] note that is a symmetric function and that . with this definition, the coefficient can be written as where is the inverse fourier transform of . since is symmetric , then is also symmetric and therefore .additionally , since then , from which it follows that . for we can exchange the order of integration in eq . by multiplying the integrand by . after performing the integral over and then taking the limit we obtain ^{k - m}[\sin\lambda]^{m } \frac{\sin(4k\eta_1\lambda)}{\lambda^2 } d\lambda.\end{aligned}\ ] ] using the fact that , eq . can be written as ^m\right ) .\label{eq : apultima}\ ] ] finally , substituting the above result into eq .we obtain ^m .\label{eq : apfinalpsi}\ ] ] by definition [ see eq . ] is the sum of independent and identically distributed variables , each with average and variance ^ 2 ] ^ 2)}\right ) } { \sqrt{2\pi k(1-[\psi(t)]^2)}}.\ ] ] with this approximation , eq . becomes ^ 2)}}\\ & \times & \int_{-4k\eta_1}^{4k\eta_1 } \exp\left(-\frac{(x - y - k\psi(t))^2}{2k(1-[\psi(t)]^2)}\right)dy,\end{aligned}\ ] ] where we have used the fact that is a constant normalized function defined in the interval ] and .let us define the extrinsic noise vector and the intrinsic noise as we will denote as the pdf ( in polar coordinates ) of the extrinsic noise and as the pdf of the intrinsic noise . as for the majority voter model , it is convenient to define the quantities let , and be the polar coordinates of the vectors , , and , respectively .we will denote as , , and the pdf s of these three vectors , respectively . 
in table[ tab:1 ] we summarize the relevant quantities appearing in this calculation ..notation guide for the different quantities involved in the calculation of the phase transition of the vnm .we have omitted the subscript in the pdf s since we assume that all the network elements are statistically equivalent . [cols="<,^,^,^",options="header " , ] note that neither nor depend on time or on the subscript of , whereas the pdf s of , , and depend on both time and the subscript .however , in a mean - field approximation , we can assume that all the vectors are _ statistically equivalent _ and _ statistically independent_. in this case , the functions , , and are site independent , ( i.e. , the same for all the vectors in the network ) and the subscript can be omitted . from now on we will assume that the conditions for the validity of the mean - field approximation ( statistical equivalence and independence ) apply . to measure the amount of order in the system , we define the instantaneous order parameter as where we have defined .under the mean - field assumption , all the vectors are equally distributed with the common probability distribution .then can be computed as follows .let be the fourier transform ( in polar coordinates ) of .the variables and are the fourier conjugates of the variables and , respectively .a cumulant expansion of up to the first order gives where is the vector in fourier space whose polar coordinates are . denoting as the angle between and , and using the fact that , eq .( [ eq : param1 ] ) can be written as thus , a first order cumulant expansion of directly gives us the order parameter .the objective of the calculation is to find a recurrence relation in time for based on eq .( [ eq : chaterule ] ) . from this recurrence relationwe will obtain the dynamical mapping that determines the temporal evolution of .note first that , since for all , then can be written as where is the pdf of the angle of . from eq .( [ eq : chaterule ] ) it follows that , and are related through {n_i}(\zeta)d\zeta .\label{eq : mainintegral}\ ] ] since , and each of the vectors is distributed with the pdf , it is clear that depends on .therefore , eq . ( [ eq : mainintegral ] ) is a recurrence relation in time for .this recurrent relation is best solved in fourier space . denoting as the fourier transform of , the above equation can be written as since , and are periodic functions of their angular arguments ( , and respectively ), we can expand these functions in fourier series as where , and are given by substituting eqs . and into eq .( [ eq : mainintegralfourier ] ) , carrying out the integration over and taking into account eq . we obtain where we have used the integral representation of the bessel function .it follows from the last expression that exchanging the order of integration in the last expression , and using the identity where and are the dirac and kronecker delta functions , respectively , eq . 
( [ eq : phichi1 ] ) becomes note that eq .( [ eq : phichi2 ] ) is a consequence of the recurrence relation given in eq .( [ eq : mainintegral ] ) , which in turn follows directly from the dynamic interaction rule eq .( [ eq : chaterule ] ) .now we have to project the probability distribution function onto the unit circle by forcing the vector to have unit length at time , and thus becoming .to do so , we take the fourier transform of given in eq .( [ eq : pv ] ) , which when evaluated at time gives substituting into the above equation the form of given in eq .( [ eq : pthetaphi ] ) ( evaluated at ) , we obtain where we have used the integral representation of the bessel function .now we use the value of given in eq .( [ eq : phichi2 ] ) , which leads to to complete the projection of onto the unit circle in a closed form , it only remains to find as a function of .since , and we are assuming that all the are statistically independent , then ^k \hat{p}_{\vec{n}_e}(\lambda,\gamma),\ ] ] where is the fourier transform of the pdf of the noise vector .since is uniformly distributed in the interval ] as ^k \approx \exp\left\{-ik\langle \vec{v}(t)\rangle\cdot\vec{\lambda}- \frac{k}{2}\vec{\lambda}\cdot{\mathbf c}(t)\cdot \vec{\lambda}\right\},\ ] ] where and are the first moment and covariance matrix of , respectively , and is the vector in fourier space with polar coordinates . with this approximation , eq .( [ eq : recurrencefinal ] ) becomes making the change of variable in the above expression , we obtain we can go a step further in the large- approximation and neglect the term appearing in the exponent inside the integral of the last expression , which gives }. \label{eq : yacasi}\end{aligned}\ ] ] now , we can write , where and are the angles in fourier space of and , respectively .the second integral on the right - hand side of eq .( [ eq : yacasi ] ) becomes }d\gamma ' & = & \int_{0}^{2\pi}e^{-i\left[m\gamma'+ |\langle \vec{v}(t)\rangle|x\cos(\gamma'-\alpha)\right]}d\gamma ' \\ & = & e^{-im\alpha}\int_{0}^{2\pi}e^{-i\left[m\tau+ |\langle \vec{v}(t)\rangle|x\cos \tau\right]}d\tau \\ & = & e^{-im\alpha}2\pi(-i)^m j_m\left(|\langle \vec{v}(t)\rangle|x\right ) \\ & = & e^{-im\alpha}2\pi(-i)^m j_m\left(\psi(t)x\right),\end{aligned}\ ] ] where we have used the fact that . substituting this result into eq .( [ eq : yacasi ] ) , we obtain now , recalling that is a constant normalized function in the interval $ ] , with , its fourier transform is given by thus , and eq .can be written as finally , expanding both sides of the above equation up to the first order in , and recalling eq .( [ eq : radialmoment ] ) for the left - hand side , we obtain the recurrence relation for the order parameter this is eq . of the main text .
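as a numerical companion to the recurrence just derived, the sketch below iterates the branch of the mapping quoted in eq. (mvectorial2) that is valid for psi > eta1, using scipy's hypergeometric function (np.sinc(eta2) equals sin(pi*eta2)/(pi*eta2)). the branch for psi < eta1 is not reproduced here, so the iteration simply stops if psi leaves the region of validity; the parameter values and function names are our own.

```python
import numpy as np
from scipy.special import hyp2f1

def M_large_psi(psi, eta1, eta2):
    """Mapping of eq. (mvectorial2), branch psi > eta1."""
    return np.sinc(eta2) * hyp2f1(0.5, -0.5, 1.0, (eta1 / psi) ** 2)

def stationary_psi(eta1, eta2, psi0=0.99, iters=2000):
    """Iterate the mapping from an ordered initial condition."""
    psi = psi0
    for _ in range(iters):
        if psi <= eta1:                   # the other branch of the mapping would be needed here
            return None
        psi = M_large_psi(psi, eta1, eta2)
    return psi

for eta2 in (0.1, 0.3, 0.5, 0.6):
    print(f"eta1 = 0.10, eta2 = {eta2:.2f}  ->  psi* = {stationary_psi(0.10, eta2)}")
```

repeating the same iteration from a nearly disordered initial condition and comparing the two stationary values is exactly the procedure used in the main text to map out the hysteresis region where both fixed points coexist.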
we analyze order - disorder phase transitions driven by noise that occur in two kinds of network models closely related to the self - propelled model proposed by vicsek et . al . to describe the collective motion of groups of organisms [ _ phys . rev . lett . _ * 75*:1226 ( 1995 ) ] . two different types of noise , which we call intrinsic and extrinsic , are considered . the intrinsic noise , the one used by vicsek et . al . in their original work , is related to the decision mechanism through which the particles update their positions . in contrast , the extrinsic noise , later introduced by grgoire and chat [ _ phys . rev . lett . _ * 92*:025702 ( 2004 ) ] , affects the signal that the particles receive from the environment . the network models presented here can be considered as the mean - field representation of the self - propelled model . we show analytically and numerically that , for these two network models , the phase transitions driven by the intrinsic noise are continuous , whereas the extrinsic noise produces discontinuous phase transitions . this is true even for the small - world topology , which induces strong spatial correlations between the network elements . we also analyze the case where both types of noise are present simultaneously . in this situation , the phase transition can be continuous or discontinuous depending upon the amplitude of each type of noise .
optimal solutions for quantum information processing tasks typically require observables that can not be described by single selfadjoint operators but are formalized as positive - operator - valued measures ( povms ) . generally , an element of an observable , called an _ effect _ , can be any positive operator bounded by the identity operator .for instance , an optimal observable for unambiguous discrimination of two non - orthogonal pure states has three elements and none of them is a projection .another example is provided by informationally complete observables , which do not have any non - trivial projections as their elements .it is well known that two projections can be elements of a single observable if and only if they commute .this condition for effects to be parts of a single observable is called _ coexistence _ .coexistence can be therefore viewed as a kind of natural generalization of commutativity .it is remarkable that two effects can be coexistent even if they do not commute , but a general criterion of coexistence is not known .this problem of characterizing coexistent effects , called the _ coexistence problem _ , is the topic of this paper .the coexistence of effects is connected to the theoretical limitations built inside the quantum theory , and the concept of coexistence provides a unifying framework for these kinds of issues .indeed , many theoretical limitations , related both to the foundations and to quantum information processing tasks , can be seen as a consequence of ( non-)coexistence of the relevant effects .for instance , the security of bennett - brassard 1984 ( bb84 ) protocol relies on the non - coexistence of the corresponding effects .moreover , assuming that the bell inequality is violated , the coexistence of certain effects would lead to the possibility of superluminal communication .coexistence ( contrary to commutativity ) also explains the possibility of unsharp joint measurements of complementary pairs of physical quantities , such as orthogonal spin components or path and interference of an atomic beam .a joint measurement of such pairs is possible only if an increased unsharpness is accepted , and a relevant coexistence condition can be then interpreted as a trade - off relation between the imprecisions of the corresponding measurements .some recent investigations on this issue are reported , for instance , in . in this workwe give a complete characterization of the previously stated coexistence problem in the case of two qubit effects . in sec .[ sec : problem ] we recall the coexistence problem in a precise formulation . in sec .[ sec : fundamental ] we present the main result of this paper a characterization theorem of coexistent pairs of qubit effects .we also show that the already known special cases are easily recovered from our theorem . a detailed proof of the characterization theorem is given in the appendixes . in appendix 1 we recall some general facts on coexistence which are needed in our investigation .appendix 2 then concentrates on the details of the proof .let be a complex separable hilbert space .an operator on is an _ effect _ if for all . in terms of operator inequalitiesthis reads where and are the zero operator and the identity operator , respectively .we denote by the set of effects .an _ observable _ is a normalized - effect - valued measure , also called a positive - operator - valued measure ( povm ) .it is defined on a measurable space , where is the set of measurement outcomes and is the -algebra of possible events . 
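the defining condition 0 <= e <= i is straightforward to test numerically for a finite-dimensional operator: e must be hermitian with all eigenvalues in [0, 1]. a small helper for such experiments is sketched below (our own code; the function name and tolerance are illustrative).

```python
import numpy as np

def is_effect(E, tol=1e-9):
    """True if E is Hermitian with spectrum contained in [0, 1]."""
    E = np.asarray(E, dtype=complex)
    if not np.allclose(E, E.conj().T, atol=tol):
        return False
    w = np.linalg.eigvalsh(E)              # eigenvalues in ascending order
    return bool(w[0] >= -tol and w[-1] <= 1 + tol)

# a projection is an effect; 1.2 times a projection is not
P = np.array([[1, 0], [0, 0]], dtype=complex)
print(is_effect(P), is_effect(1.2 * P))
```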
for each event , the observable attaches an effect . if the system is in a vector state and a measurement of is performed , the probability of getting a measurement outcome belonging to an event is .detailed explanations and many examples of this generalized description of quantum observables can be found in . for a singleton set ,we denote .if the set of measurement outcomes is countable , then is determined by the set of effects , .namely , a general effect corresponding to an event is recovered by formula in particular , an observable with a finite number of measurement outcomes ( say , ) can be described as a list .the povm normalization condition then reads we can also look on the structure of observables from the other side : given a collection of effects , we can ask whether they originate in a single observable .this concept , called coexistence , was first studied by ludwig .[ def : definition of coexistence ] effects are _ coexistent _ if there exists an observable and events such that if two effects and are coexistent , we denote .it is essential to note that in definition [ def : definition of coexistence ] the events need not be disjoint .as an example , let be the symmetric informationally complete qubit observable consisting of four effects \ , , \\ { \mathsf{f}}_2 & = & \frac{1}{4 } \left [ { i}+\frac{1}{\sqrt{3}}(-\sigma_x-\sigma_y+\sigma_z ) \right ] \ , , \\{ \mathsf{f}}_3 & = & \frac{1}{4 } \left [ { i}+\frac{1}{\sqrt{3}}(-\sigma_x+\sigma_y-\sigma_z ) \right ] \ , , \\{ \mathsf{f}}_4 & = & \frac{1}{4 } \left [ { i}+\frac{1}{\sqrt{3}}(\sigma_x-\sigma_y-\sigma_z ) \right ] \ , .\end{aligned}\ ] ] the fact that is an observable implies that the effects , are coexistent . indeed, we get and similarly for the other two effects . actually , this reasoning leads also to a proof of the informational completeness of as we can conclude that a measurement of provides the same information as three separate measurements of the orthogonal spin components .this example should be compared with the fact that any two projections and with do not commute and hence are not coexistent . in this paperwe concentrate on the following _ coexistence problem _ : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ given an effect , characterize all effects which are coexistent with it . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the following simple observation shows that when we are studying the coexistence of two effects ( as in this paper ) , we can restrict ourselves to four outcome observables .[ prop : g4 ] effects and are coexistent if and only if there exists an observable with four outcomes such that by definition , if a four outcome observable satisfying eq .( [ eq : g4 ] ) exists , then and are coexistent .assume then that and are coexistent and let be an observable such that for some .we denote and , and we set , , , and .this defines an observable with the required properties .if is a projection ( i.e. ) , then the answer to the coexistence problem is simple and well known : an effect is coexistent with exactly when .generally , however , a characterization of coexistent effects is not known . 
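the symmetric informationally complete qubit observable from the example above is easy to verify numerically. in the sketch below the first effect is written as f_1 = 1/4 [ i + (sigma_x + sigma_y + sigma_z)/sqrt(3) ], which is the only choice consistent with normalization given the three effects listed; the checks confirm that the four effects sum to the identity and that the coarse-grainings f_1+f_2, f_1+f_3 and f_1+f_4 are the smeared spin effects 1/2 [ i + sigma/sqrt(3) ].

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def effect(nx, ny, nz):
    """Effect 1/4 [ I + (nx sx + ny sy + nz sz)/sqrt(3) ]."""
    return 0.25 * (I2 + (nx * sx + ny * sy + nz * sz) / np.sqrt(3))

F1, F2, F3, F4 = (effect( 1,  1,  1), effect(-1, -1,  1),
                  effect(-1,  1, -1), effect( 1, -1, -1))

assert np.allclose(F1 + F2 + F3 + F4, I2)                        # POVM normalization
assert np.allclose(F1 + F2, 0.5 * (I2 + sz / np.sqrt(3)))        # smeared sigma_z
assert np.allclose(F1 + F3, 0.5 * (I2 + sy / np.sqrt(3)))        # smeared sigma_y
assert np.allclose(F1 + F4, 0.5 * (I2 + sx / np.sqrt(3)))        # smeared sigma_x
print("all checks passed")
```

measuring this observable and coarse-graining the outcomes therefore yields unsharp versions of all three orthogonal spin components at once, which is the informational-completeness argument made in the text.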
in the next section we present a full solution to the coexistence problem in the case of a qubit system , i.e. two dimensional hilbert space .qubit effects and can be parametrized by vectors in the following way : [ definition of a and b ] here is the vector of pauli matrices , and we have denoted , . note that from eqs .( [ definition of a ] ) and ( [ definition of b ] ) it follows that .we are now considering to be fixed and we are looking for all effects ( hence all parameters and ) , which are coexistent with . in order to formulate the characterization theorem , we first introduce the following function from to ] . in this casethe difference is negative in between the two roots , and these solutions do not correspond to a vector in . as the last step , we take a look at as a quadratic polynomial of .first of all , it is non - negative at and , and its two roots , labeled by and such that , belong to the interval ] , the discriminant is negative and therefore all are in the allowed region as discussed earlier . the scaling property from proposition [ prop : scaling ]says that all effects having will be coexistent with as well .therefore , whenever , all vectors of length are in the allowed region .what is left is to check the case .we assume that and we show that in this case the solutions lie inside the interval ] , ] .then is a continuous function and its domain is a connected region in .a direct calculation shows that the equation implies thus , can take the value only on the boundary of its domain . from the continuity of and the connectedness of its domain then follows , that if for one point in the domain we have , then in the whole domain .an analogous equation to eq .allows similar reasoning for the lower limit . on the other hand, we have , which is inside the interval .we thus conclude that ] can be shown in a similar way .the fact that for all vectors are in the allowed region leads us to the definition of sharpness for effects in eq .( [ eq : sharpness ] ) we define the sharpness as . we then conclude that * if , the whole boundary is formed by vectors of length in other words , in this case the allowed region is a circle with diameter and the center at , corresponding to ( c1 ) ; * if , the boundary is given by vectors of length if and only if , corresponding to ( c2 ) .let us assume , for instance , that a 3ci case defining a boundary point is formed by the intersection of the circles 1 , 2 , and 4 , i.e. , the circles 1 , 2 and 4 intersect in a single point which is inside the circle 3 ( see fig . [ tri ] ) .looking at fig .[ fig : jedna ] , points common to circles 1 and 2 are and .the first one is not closer to circle 4 than the second one .therefore , if the circles 1 , 2 , and 4 have a single common point , it must be .let us define the following function to compare the distance of from the center of the circle 4 and its radius , if the point lies on circle 4 for some , then .moreover , if the point lies inside ( outside ) circle 4 , then [ .if , then there exists an interval ( , ) where . since we have assumed that the common point of circles 1 , 2 , and 4 is inside circle 3 , there exists such that the four circles intersect in a region with nonzero area .this is in contradiction with proposition [ main idea ] . a similar reasoning rules out the case .therefore , a necessary condition for a 3ci case is the set of equations and .the coordinates for are and . 
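the parametrization of qubit effects introduced at the beginning of this section is convenient to work with numerically. we take it to be the standard pauli decomposition a = 1/2 (alpha i + a . sigma), so that alpha = tr a and a_i = tr(a sigma_i); in these variables the effect condition reads |a| <= min(alpha, 2 - alpha), since the eigenvalues of a are (alpha +/- |a|)/2. the helper below (our own code, illustrative names) recovers (alpha, a) from a 2x2 matrix and checks that condition.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_parameters(A):
    """Return (alpha, a) such that A = 1/2 (alpha*I + a . sigma)."""
    alpha = np.trace(A).real
    a = np.array([np.trace(A @ s).real for s in (sx, sy, sz)])
    return alpha, a

def is_qubit_effect(A, tol=1e-9):
    alpha, a = pauli_parameters(A)
    return np.linalg.norm(a) <= min(alpha, 2 - alpha) + tol

A = 0.5 * (0.8 * np.eye(2) + 0.3 * sz)          # alpha = 0.8, a = (0, 0, 0.3)
print(pauli_parameters(A), is_qubit_effect(A))
```

with such helpers one can scan over candidate effects b numerically and trace out the allowed region described by conditions (c1)-(c3) for a fixed a.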
by making the substitution we can express the distance as a function of the new variable in the form where , , and .equation leads to a unique solution . putting this into equation , we get . substituting back the definitions for , and we finally obtain that a necessary condition for this particular 3ci is =0\ , .\label{3ci condition}\ ] ] if the expression in the first bracket is zero , we obtain the condition for 2ci of circles 1 and 4 .if the second bracket is zero , circles 2 and 4 are one inside the other ( if , then circle 4 is inside circle 2 , and it is the opposite if ) .their intersection is then the whole smaller circle and this fact does not depend on .we can then disregard the larger circle completely , because the intersection does not depend on it in any respect .the 3ci is thus reduced to 2ci and can not therefore define boundary points different from those found in the previous section dealing with 2ci . in the same way , one finds out that the three other possible 3ci cases are similar and always lead to boundary points defined by a 2ci intersection .the resulting conditions , analogous to eq . , are summarized in table [ tab1 ] for all four possible 3ci ..[tab1]necessary conditions for all four possible three circle intersections .the three circles intersecting in a single point are given in the first column .the necessary condition and its geometrical meaning for a 3ci defining the boundary are in the second column .[ cols= " < , < " , ] let us assume that the first two conditions in ( c3 ) hold .we show that then the right - hand side of eq . defines the perpendicular component of vectors forming the boundary of the allowed region .a four point intersection can occur if one of the points and coincides with one of the points and . since , a single point intersection must be such that points and coincide .putting their coordinates to be equal we obtain the solution for , \ , .\label{gamma ac}\ ] ] this solution represents the four circle intersection if the intersection points and exist .point exists if , while point exists if . using these conditions ,we conclude that eq . represents a case = if and only if the following condition is fulfilled : under the first two conditions in ( c3 ) , these inequalities are fulfilled .first of all , a straightforward calculation shows that .hence , the inequality guarantees that .it is then easy to verify that the .therefore , if , then eq . holds .putting equal the coordinates for the points and we finally obtain the equation of the coordinate as a function of , ^ 2\}}\\+ & \frac{1}{2a}\sqrt{((2-\alpha)^2-a^2)\{a^2-[\alpha(1-\beta)+a { b_{\parallel}}]^2\ } } \ , .\label{blue curve } \end{split}\ ] ] this can be rewritten in the form given in eq . . finally , we look at the situation where the vectors and are parallel . in this casethe effects and commute , and this implies that they are coexistent . to check their coexistence directly from definition [ def : definition of coexistence ] , one can use a four outcome observable defined as on the other hand , the fact that and are parallel means that . clearly , the conditions ( c1)(c3 ) do not then lead to any restrictions .this completes the proof of theorem [ th : fundamental ] .we have seen in step 3 that for , condition is fulfilled and therefore the expression in eq .determines a vector in the allowed region .we know from the 2ci case that for , vector of length is not in the allowed region . 
from thisfollows that the length of vector must be shorter than .we can also see this directly from the expression for .if we denote , we get after some algebraic manipulation that * implies , * implies that either or .the first point shows that the vector does not reach the length of anywhere inside the interval and therefore the inequality in eq. holds . on the other hand , since in the two points , the continuity of as a function of implies that it reaches the minimum value somewhere inside the interval .the second point shows that it happens at .our work has been supported by projects conquest no .mrtn - ct-2003 - 505089 , qap no .2004-ist - fetpi-15848 , and apvv no .rpeu-0014 - 06 .
we characterize all coexistent pairs of qubit effects . this gives an exhaustive description of all pairs of events allowed , in principle , to occur in a single qubit measurement . the characterization consists of three disjoint conditions which are easy to check for a given pair of effects . known special cases are shown to follow from our general characterization theorem .
with the wide and ever - growing availability of multi - core processors it is evermore compelling to design and develop more efficient concurrent data structures .the non - blocking concurrent data structures are more attractive than their blocking counterparts because of obvious reasons of being immune to deadlocks due to various fault - causing factors beyond the control of the data structure designer .lock - free data structures provide guarantee of deadlock / livelock freedom with fault tolerance and are usually faster than wait - free ones which provide additional guarantee of starvation freedom at the cost of increased programming complexity . in literature , there are lock - free as well as wait - free singly linked - lists , lock - free doubly linked - list , lock - free hash - tables and lock - free skip - lists . however , not many performance - efficient non - blocking concurrent search trees are available . a multi - word compare - and - swap ( ` mcas ` ) based lock - free bst implementationwas presented by fraser in .however , ` mcas ` is not a native atomic primitive provided by available multi - core chips and is very costly to be implemented using single - word ` cas ` .bronson et al . proposed an optimistic lock - based partially - external bst with relaxed balance .ellen et al . presented lock - free external binary search tree based on co - operative helping technique presented by barnes .though their work did not include an analysis of complexity or any empirical evaluation of the algorithm , the contention window of update operations in the data structure is large .also , because it is an external binary search tree , delete are simpler at the cost of extra memory to maintain internal nodes without actual values .howley et al .presented a non - blocking internal binary search tree based on similar technique . a software transactional memory based approach was presented by crain et al . to design a concurrent red - black tree .while this approach seems to outperform some coarse - grained locking methods , they are easily vanquished by a carefully tailored locking scheme as in .recently , two lock - free external bsts and a lock - based internal bst have been proposed .all of these works lack comprehensive theoretical complexity analysis . in an internal bst , add operations start from the root and finish at a leaf node , where the new element is inserted . to remove a node which has both the children present , its successor or predecessor is shifted to take its place .a common predicament for the existing lock - free bst algorithms is that if multiple modify operations contend at a leaf node , and , if a remove operation among them succeeds then all other operations have to restart from the root .it results in the complexity of a modify operation to be where is the height of the bst on nodes and is the measure of contention . 
it may grow dramatically with the growth in the size of the tree and the contention .in addition to that , contains operations have to be aware of ongoing remove of nodes with both children present , otherwise , it may return invalid results , and , hence in the existing implementations of lock - free internal bst , they also may have to restart from the root on realizing that the return may be invalid if the other subtree is not scanned .the external or partially - external bsts remain immune to the latter problem at a cost of extra memory for the routing internal nodes .our algorithm solves both these problems elegantly .the contains operations in our bst enjoy oblivion of any kind of modify operation as long as we do not put them to help a concurrent remove , which may be needed only in write - heavy situations . also , the modify operations after helping a concurrent modify operation restart not from the root rather from a level in the vicinity of failure .it ensures that all the operations in our algorithm run in .this is our main contribution .we always strive to exploit maximum possible disjoint - access - parallelism in a concurrent data structure .the lock - free methods for bst , in order to use single - word ` cas ` for modifying the outgoing links atomically , and yet maintain correctness , store a flag as an operation field or some version indicator in the node itself , and hence a modify operation `` holds '' a node .this way of holding a node , specifically for a remove , can reduce the progress of two operations which may remain non - conflicting if modifying two outgoing links concurrently . in , a flag is stored in a link instead of a node in an external bst .we chase this problem of just holding a link for an internal bst .we find that it is indeed possible that a remove operation , instead of holding the node , just holds the links connected to and from a node in a determined order so that maximum possible progress of two concurrent operations , working at two disjoint memory words corresponding to two links can be ensured .we take the business of `` storing a flag '' to the link level from the node level which significantly improves the disjoint - access - parallelism .this is our next contribution . helping mechanism which ensures non - blocking progress may prove counterproductive to the performance if not used judiciously .however , at times , if the proportion of remove operations increases , which may need help to finish their pending steps , it is better to help them , so that the traversal path does not contain large number of `` under removal '' nodes .keeping that in view , we take helping to a level of adaptability to the read - write load : we provide choice over whether an operation , during its traversal , helps an ongoing remove operation .we believe that this adaptive conservative helping in internal bsts may be very useful in some situations .this is a useful contribution of this work .our algorithm requires only single - word atomic ` cas ` primitives along with single - word atomic read and write which now exist in practically all the widely available multi - core processors in the market . based on our design ,we implement a set adt .we prove that our algorithm is linearizable .we also present complexity analysis of our implementation which is lacking in existing lock - free bst algorithms .this is another contribution in this paper .the body of our paper will further consist of the following sections . 
in section [ secexiststruct ] ,we present the basic tree terminologies . in section [ secalgo ] ,the proposed algorithm is described .section [ secanalysis ] presents a discussion on the correctness and the progress of our concurrent implementation along with an amortized analysis of its step complexity .the paper is concluded in section [ secconclude ] .a _ binary tree _ is an ordered tree in which each _ node _ has a _ left child _ and a _ right child _ denoted as and respectively , either or both of which may be _external_. when both the children are external the node is called a _ leaf _ , with one external child a _unary _ node and with no external child a _ binary _ node , and , all these non - external nodes are called _ internal _ nodes .we denote the parent of a node by and there is a unique node called _ root _ s.t .each parent is connected with its children via pointers as links ( we shall be often using the term pointer and link interchangeably when the context will be understood ) .we are primarily interested in implementing an ordered _ set _ adt - _ binary search tree _ using a binary tree in which each node is associated with a unique key selected from a totally ordered universe .a node with a key is denoted as and if the context is otherwise understood .determined by the total order of the keys , each node has a _ predecessor _ and a _ successor _ , denoted as and , respectively .we denote height of by , which is defined as the distance of the deepest leaf in the subtree rooted at from .+ we focus on _ internal bsts _ ,in which , all the internal nodes are _ data - nodes _ and the external nodes are usually denoted by .there is a _symmetric order _ of arranging the data - all the nodes in the _ left subtree _ of have keys less than and all the nodes in its _ right subtree _ have keys greater than , and so no two nodes can have the same key . to query if the bst contains a data with key , at every _ search - step _we utilize this order to look for the desired node either in the left or in the right subtree of the _ current node _ if not matched at it , unless we reach an external node . on reaching an external nodewe return , else , if the key matches at a node then we return or address of the node if needed . to add data , we query by its key . on reaching an external nodewe replace this node with a new leaf node . to remove a data - node corresponding to key we check whether is in the bst .if the bst does not contain , is returned . on finding a node with key , we perform delete as following .if it is a leaf then we just replace it with an external node .in case of a unary node its only child is connected to its parent . for a binary node , it is first replaced with or , which may happen to be a leaf or a unary node , and then the replacer is removed .+ in an alternate form - an _ external bst _ , all the internal nodes are _ routing - nodes _ and the external nodes are data - nodes . in this paperwe focus on internal bsts , and hence forward , by a bst we shall mean an internal bst .to implement a lock - free bst , we represent it in a threaded format . in this format, if the left or the right child pointers at is null and so corresponds to an external node , it is instead connected to or , respectively .some indicator is stored to indicate whether a child - link is used for such a connection .this is called _ threading _ of the child - links . 
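before describing how these threaded links are used in the lock - free design , the following is a minimal , purely sequential sketch of an internal bst with the add and remove behaviour outlined above ; it ignores threading , concurrency and balancing , and every name in it is illustrative rather than taken from the actual implementation .

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None     # None plays the role of an external node here
        self.right = None

def contains(root, key):
    node = root
    while node is not None:              # symmetric-order search
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

def add(root, key):
    if root is None:
        return Node(key)
    node = root
    while True:                          # walk down to an external position
        if key == node.key:
            return root                  # duplicate keys are not allowed
        nxt = node.left if key < node.key else node.right
        if nxt is None:
            child = Node(key)
            if key < node.key:
                node.left = child
            else:
                node.right = child
            return root
        node = nxt

def remove(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = remove(root.left, key)
    elif key > root.key:
        root.right = remove(root.right, key)
    else:
        if root.left is None:            # leaf or unary node: splice it out
            return root.right
        if root.right is None:
            return root.left
        succ = root.right                # binary node: copy the successor up,
        while succ.left is not None:     # then remove the successor instead
            succ = succ.left
        root.key = succ.key
        root.right = remove(root.right, succ.key)
    return root
```

in the lock - free algorithm the same symmetric order is preserved , but each link update becomes a single - word ` cas ` and the removal of a binary node is split into the flagging , marking and pointer - swapping steps described in the following sections .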
in our design ,we use the null child pointers at the leaf and unary nodes as following - right child pointer , if null , is threaded and is used to point to the successor node , whereas , a similar left child pointer is threaded to point to the node itself , see fig . [ bst](a ) . in this representation a binary tree can be viewed as an ordered list with exactly two outgoing and two incoming pointers per node , as shown in fig .[ bst](b ) . also , among two incoming pointers , exactly one is threaded and the other is not .further , if and are two nodes in the bst and there is no node such that then the interval ] containing two child pointers : ~:=~left(x) ] , ( c ) a and ( d ) a .the bit sequence corresponding to boolean variables overlapping three stolen bits from a link is represented as .one or more bits together can be set or unset using a single atomic ` cas ` operation over the pointer containing it .the node structure is shown in lines [ decbegin ] to [ decend ] .we use two dummy nodes as global variables represented by a two member array of node - pointers .the keys and are stored in the two members of and they can never be deleted .node ] , see line [ gv ] and figure [ bst_node_category ] ( c ) . the set operations - contains , add and remove , need to perform a predecessor query using a given key to locate an interval ] , associated with a threaded link , containing key . if locate returns 2 then key is present in the bst and therefore add returns .if it returns 0 or 1 then we create a new node containing the key .its left - link is threaded and connected to itself and right - link takes the value of the link to which it needs to be connected , line [ lem5_2 ] .note that when a new node is added , both its children links are threaded .also , its backlink is pointed to the node .the link which it needs to connect to is modified in one atomic step to point to the new node using a ` cas ` .if the ` cas ` succeeds then is returned . on failure, it is checked whether the target link was flagged , marked or another add operation succeeded to insert a new node after we read the link . in case a new node is inserted , we start locating for a new proper link starting with the location comprising nodes at the two ends of the changed link . on failure due to marking or flagging of the current link , the concurrent remove operation is helped .recovery from failure due to a flagging or marking by a concurrent remove operation makes it to go one link back following the backlink of . after locating the new proper link ,add is retried .in this section we first show that executions in our algorithm produce a correct bst with linearizable operations .then we prove the lock - free progress property and finally we discuss its amortized step complexity . herewe present a proof - sketch of correctness , due to space constraints .a detailed proof can be found in the technical report .first we give classification of nodes in a bst implemented by our algorithm .[ def1 ] a node is called logically removed if its right - link is marked and s.t . either or .a node is called physically removed if s.t . either or or .a node is called regular if it is neither logically removed nor physically removed .a node ever inserted in has to fit in one of the above categories . 
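before turning to the correctness argument , here is a hedged sketch of the kind of link word used above , with the flag , mark and thread indicators packed into three stolen bits alongside a child pointer ; the bit layout , the names and the ` cas ` stand - in are illustrative assumptions , not the paper 's actual declarations .

```python
# three stolen bits of a link word; the remaining high bits hold the
# (aligned) address of the child node, modelled here as a plain integer id.
FLAG   = 0b001   # set while a remove has announced its intent on this link
MARK   = 0b010   # set when the link is being logically deleted
THREAD = 0b100   # set when the link is a thread (points to successor / self)

def pack(child_id, flag=False, mark=False, thread=False):
    word = child_id << 3
    if flag:   word |= FLAG
    if mark:   word |= MARK
    if thread: word |= THREAD
    return word

def unpack(word):
    return word >> 3, bool(word & FLAG), bool(word & MARK), bool(word & THREAD)

def cas(cell, index, expected, new):
    """stand-in for a single-word compare-and-swap on cell[index];
    real code would use a hardware cas so the compare and the write
    happen in one atomic step."""
    if cell[index] == expected:
        cell[index] = new
        return True
    return False

# example: flag the right link of a node whose child array is `links`
links = [pack(7, thread=True), pack(9)]     # [left, right], illustrative ids
old = links[1]
ok = cas(links, 1, old, old | FLAG)         # fails if a concurrent update won
print(unpack(links[1]), "flag set:", ok)
```

a real implementation applies a hardware single - word ` cas ` to such a word , so that checking the expected pointer value and setting a flag or mark bit happen atomically .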
at a point in the history of executions, the bst formed by our algorithm contains elements stored in regular nodes or logically removed nodes .before we show that the return values of the set operations are consistent with this definition according to their linearizability , we have to show that the operations work as intended and the structure formed by the nodes operated with these operations maintains a valid bst .we present some invariants maintained by the operations and the nodes in lemmas [ lem1 ] to [ lem12 ] .[ lem1 ] if a locate , , returns and terminates at ] is threaded . c. is not physically deleted .[ lem2 ] a contains operation returns true if and only if the key is located at a non - physically removed node .[ lem3 ] an add always happens at an unmarked and unflagged threaded link .[ lem4 ] a remove always starts by flagging the incoming order - link to a node .[ lem5 ] when a node is inserted , both its left- and right- links both threaded .[ lem6 ] before a node is logically removed its incoming order - link is flagged and its prelink points to its correct order - node .[ lem8 ] backlink of a node always points to a non physically removed node .[ lem9 ] an unthreaded left - link can not be both flagged and marked .[ lem10 ] a right - link can not be both flagged and marked .[ lem11 ] a link once marked never changes again .[ lem12 ] if a node gets logically removed then it will be eventually physically removed .lemma [ lem1 ] follows from the lines [ locsuc ] and [ term2 ] .contains returns only if locate returns 2 and that happens only if is non - physically removed at line [ cmp ] during its execution , this proves lemma [ lem2 ] . in the case of a key match ,add returns and otherwise it tries adding the key using an atomic ` cas ` . if the ` cas ` fails , it always uses locate to find the desired link before retrying the ` cas ` to add the new node . from this observation and using [ lem1](b ) ,lemma [ lem3 ] follows . by lemma [ lem1 ]if is present in the tree then locate , , will always terminate at a location such that is order - node of and that establishes lemma [ lem4 ] .lemma [ lem5 ] follows from lines [ lem5_1 ] and [ lem5_2 ] .line [ flag_copy ] ensures that even if the function cleanflag helps a pending remove , before it could successfully mark the right - link at line [ atomicmark ] the flag that was put on the order link at line [ algtryflag ] is copied to the changed order - link .also , the line [ setprelink ] inside the while loop ensures that prelink is always set to the order - node .that proves the correctness of lemma [ lem6 ] . when a node is added its backlinkis pointed to at line [ lem8_1 ] .before a remove operation returns it ensures that the backlinks of the predecessor , left - child and right - child , if present for the node under remove , are appropriately updated at lines [ ptr_swapping_cat1_2 ] , [ bk_update1 ] , [ ptr_swapping_cat2_2 ] , [ bk_update2 ] , [ bk_update3 ] , [ bk_update4 ] and [ cmend ] .hence , by induction , lemma [ lem8 ] is proved . 
in our algorithmwe always use an atomic ` cas ` to put a flag or mark - bit in a pointer .whenever a ` cas ` fails we check the possible reason .the function trymark helps cleaning the flag in all cases except when the link has direction 0 and it is threaded , line [ trymark_ret_left ] .these observations prove lemmas [ lem9 ] and [ lem10 ] .lemma [ lem11 ] follows from lemmas [ lem9 ] and [ lem10 ] .lemma [ lem11 ] proves that once the right - link of a node is marked , it can not be reversed and if the thread invoking remove to mark it becomes faulty then eventually another thread invoking a possible add or remove which gets obstructed , will help to complete the physical removal . having proved the lemmas listed above , it is trivial to observe that whenever a pointer is dereferenced it is not null . and , by the initialization of the global variable to and , at line [ gv ] , the two starting nodes are never deleted .+ hence , we state proposition [ treedef ] whose proof will follow by the above stated lemmas and the fact that a thread always takes a correct `` turn '' during traversal according to the symmetric order of the internal bst .[ treedef ] the union of the regular and logically removed nodes operated under the set operations in the algorithm efficient lock free bst maintain a valid internal binary search tree .an execution history in our implementation may consist of add , remove and contains operations .we present the linearization point of the execution of these operations .proving that a history consisting of concurrent executions of these operations is legal will be ordering these linearization points .the linearization points of the operations are as following : * add * - for a successful add operation , execution of the ` cas ` at line [ addsuc ] will be the linearization point . for an unsuccessful one the linearization point will be at line [ cmp ] wherea key in the tree is found matched . *remove * - for a successful remove operation the linearization point will be the successful ` cas ` that swaps the flagged parent link .for an unsuccessful one there may be two cases - ( a ) if the node is not located then it is treated as an unsuccessful contains and its linearization point will be accordingly ( b ) if the node is located but its order - link got flagged by another concurrent remove then its linearization point is just after the linearization point of that remove .* contains * - our algorithm varies according to the read - write load situation . in casewe go for eager helping by a thread performing locate , a successful contains shall always return a regular node .however , if we opt otherwise then a successful contains returns any non - physically removed node . in both situationsa successful contains will be linearized at line [ cmp ] .an unsuccessful one , if the node never existed in the bst , is linearized at the start point . and , if the node existed in the bst when the contains was invoked but got removed during its traversal by a concurrent remove then the linearization point will be just after the successful ` cas ` operation that physically removed the node from the bst . following the linearization pointsas described above we have proposition [ linearize ] : [ linearize ] the set operations in the algorithm efficient lock free bst are linearizable .the lemmas [ lem6 ] , [ lem9 ] , [ lem10 ] and [ lem11 ] imply the following lemma .[ rem_obstruct ] if remove and remove work concurrently on nodes and then without loss of generality a. 
if is the left - child of and both and are logically deleted then remove finishes before remove . b. if is the right - child of and both and are logically deleted then remove finishes before remove . c. if is the predecessor of and the order - links of both and have been successfully flagged then remove finishes before remove .d. if is the predecessor of and has been logically deleted then remove finishes before the order - link of could be successfully flagged .e. if is the left - child of the predecessor of and the incoming parent - link of has been successfully flagged then remove finishes before remove .f. if is the left - child of the predecessor of and the left - link of the predecessor of has been successfully marked then remove finishes before remove . g. in all other cases remove and remove do not obstruct each other . by the description of our algorithm, a non - faulty thread performing contains will always return unless its search path keeps on getting longer forever . if that happens , an infinite number of add operations would have successfully completed adding new nodes making the implementation lock - free .so , it will suffice to prove that the modify operations are lock - free . considering a thread performing a pending operation on a bst and takes infinite steps , and, no other modify operation completes after that .now , if no modify operation completes then the tree remains unchanged forcing to retract every time it wants to inject its own modification on .this is possible only if every time finds its injection point flagged or marked .this implies that a remove operation is pending .it is easy to observe in the function add that if it gets obstructed by a concurrent remove then before retrying after recovery from failure it helps the pending remove by taking all the remaining steps of that . also from lemma [ rem_obstruct ], whenever two remove operations obstruct each other , one finishes before the other .it implies that whenever two modify operations obstruct each other one finishes before the other and so the tree changes .it is contrary to our assumption .hence , by contradiction we show that no non - faulty thread shall remain taking infinite steps if no other non - faulty thread is making progress .this proves the following proposition [ lockfree ] .[ lockfree ] lock - freedom is guaranteed in the algorithm efficient lock free bst .having proved that our algorithm guarantees lock - freedom , though we can not compute worst - case complexity of an operation , we can definitely derive their amortized complexity .we derive the amortized step complexity of our implementation by the accounting method along the similar lines as in . 
in our algorithm , an add operation does not have to hold any pointer and so does not obstruct an operation .we observe that after flagging the order - link of a node , a remove operation takes only a constant number of steps to flag , mark and swap pointers connected to the node and to its predecessor , if any , in addition to setting the prelink of the node under remove .so , if a modify operation gets obstructed by then it would have to perform only a constant number of extra steps in order to help provided that during the help it is not obstructed further by another remove operation .this observation implies following lemma [ lemextrasteps ] an obstructing operation makes an obstructed operation take only a constant number of extra steps in order to finish the execution of .now for an execution , let be the number of nodes at the beginning of .let be the set of operations and let be the set of steps taken by all .considering the invocation point of to be the time it reads the root node , and response point to be the time it reads or writes at the last link before it leaves the tree , its _ interval contention _ is defined as the total number of operations whose execution overlaps the interval ] s.t .either and and are two nodes in the bst .let us define the _ access - node _ of an interval as the order - node of the node that the interval associates with .we define _ distance _ of an interval from a traversal as the number of links that traverses from its current location to read the access - node of the interval .suppose that at there are nodes in the bst .clearly , distance of the interval associated with any node for at is .it is also easy to observe that unless accesses a node , distance of any node in the subtree rooted at it is .when travels in the left subtree of a category 3 node and if it gets removed , the interval associated with it gets associated with the leftmost node in the right subtree of the node that replaces it and so the distance of that interval from changes .there is no such change for a category 1 or category 2 node .however , once an interval gets associated with the leftmost node in the right - subtree of , which is obviously a category 1 node , its distance can not become more than from a traversal that has accessed .these observations show that the path length of a traversal in our lock - free bst is bounded by , if no node is added in the traversal path . in case a node is added , the extra read cost is mapped to the concurrent add .having identified the operations to map for a step , it is easy to observe that an operation to which a step by an operation is mapped , always has its execution interval overlapping ] .we can then use a tighter notion of _ point contention _ , which counts the maximum number of operations that execute concurrently at any point in $ ] . in that case ,given the above discussion , along the similar lines as presented in , we can show that for any execution , the average amortized step complexity of a set operation in our algorithm will be where is the number of nodes in the bst at the point of invocation of and is its point contention during . that concludes the amortized analysis of our algorithm .it is straightforward to observe that the number of memory - words used by a bst with nodes in our design is .in this paper we proposed a novel algorithm for the implementation of a lock - free internal bst . 
using amortized analysis we proved that all the operations in our implementation run in time . we solved the existing problem of `` retry from scratch '' for modify operations after a failure caused by a concurrent modify operation , which resulted in an amortized step complexity of . this improvement takes care of an algorithmic design issue for which the step complexity of modify operations increases dramatically with the increase in the contention and the size of the data structure . this is an important improvement over the existing algorithms . our algorithm also comes with improved disjoint - access - parallelism compared to similar lock - free bst algorithms . we also proposed a conservative helping technique which adapts to the read - write load on the implementation . we proved the correctness of the proposed algorithm by showing its linearizability and lock - freedom . we plan to thoroughly evaluate our algorithm experimentally vis - a - vis existing concurrent set implementations . this work was supported by the swedish research council under grant number 37252706 as part of project scheme ( www.scheme-project.org ) . b. chatterjee , n. nguyen , and p. tsigas . efficient lock - free binary search trees . technical report 2014:05 , issn 1652 - 926x , department of computer science and engineering , chalmers university of technology , 2014 . s. timnat , a. braginsky , a. kogan , and e. petrank . wait - free linked - lists . in r. baldoni , p. flocchini , and r. binoy , editors , _ principles of distributed systems _ , volume 7702 of _ lncs _ , pages 330 - 344 . springer berlin heidelberg , 2012 .
in this paper we present a novel algorithm for concurrent lock - free internal binary search trees ( bst ) and implement a set abstract data type ( adt ) based on that . we show that in the presented lock - free bst algorithm the amortized step complexity of each set operation - add , remove and contains - is , where , is the height of bst with number of nodes and is the contention during the execution . our algorithm adapts to contention measures according to read - write load . if the situation is read - heavy , the operations avoid helping pending concurrent remove operations during traversal , and , adapt to interval contention . however , for write - heavy situations we let an operation help pending remove , even though it is not obstructed , and so adapt to tighter point contention . it uses single - word compare - and - swap ( ` cas ` ) operations . we show that our algorithm has improved disjoint - access - parallelism compared to similar existing algorithms . we prove that the presented algorithm is linearizable . to the best of our knowledge this is the first algorithm for any concurrent tree data structure in which the modify operations are performed with an additive term of contention measure . _ keywords _ : concurrent data structures , binary search tree , amortized analysis , shared memory , lock - free , cas
it has been discussed recently that , using krylov space solvers , the solutions of shifted linear equations , where has to be calculated for a whole set of values of , can be found at the cost of only one inversion . this kind of problem arises in quark propagator calculations for qcd as well as in other parts of computational physics ( see ) . it has been realized that several algorithms allow one to perform this task using only as many matrix - vector operations as the solution of the most difficult single system requires . this has been achieved for the qmr , the mr and the lanczos - implementation of the bicg method . we present here a unifying discussion of the principles used to construct such algorithms and succeed in constructing shifted versions of the cg , cr , bicg and bicgstab algorithms , using only two additional vectors for each mass value . the iterates of krylov space methods , especially the residuals , are generally polynomials of the matrix applied to some initial vector . the polynomial generating the residual generally has to be an approximation to zero in some region enclosing the spectrum of while satisfying . the key to the method is the observation that shifted polynomials , defined by are useful objects , since vectors generated by these shifted polynomials can be calculated for multiple values using no additional matrix - vector multiplications . we expect if the condition number of is smaller than the one of , which is confirmed in numerical tests . generally the polynomial generated in a solver is defined by some recursion relation . we will therefore need to know the recursion relation for the shifted polynomial , too , which can be found easily by parameter matching . here , we discuss only polynomials which satisfy , but more general normalisation conditions are handled analogously . for the two - term recursion relation we find for the polynomial shifted by . this formula has also been found in with different methods . from ( [ line1 ] ) we can read off the parameters of the shifted polynomial , while ( [ line2 ] ) determines . note that this formula holds for any choice of . we can easily generalize this method to three - term recurrences and find explicit expressions for the parameters of the shifted polynomial . it turns out that the lanczos polynomials for the matrices and fulfill ( [ eq1 ] ) automatically ( this was the original observation in ) . we derived the shifted polynomial , however , for arbitrary choices of the parameters and . the most interesting case is that of coupled two - term recurrences , since they have superior stability properties over three - term recurrences . we consider the general recurrence . in -type algorithms , the parameters are chosen so that is the lanczos - polynomial ( normalized to ) . we thus demand .
by transforming the above relation to a simple three - term recurrence and applying the formulae found in this case for the shifted parameters we find it can easily be checked that , so if we want to use this recursion relation in an algorithm we have to replace in the shifted systems .since in cg - type algorithms the update of the solution vector involves , this vector has to be iterated and stored for all shifted systems .using the above formulae , we can easily derive a variety of linear system solvers as shown in table [ tab1 ] ..[tab1]memory requirements and references for shifted system algorithms for unsymmetric or nonhermitean matrices .we list the number of additional vectors neccessary for additional values of ( which is independent of the use of the -symmetry ) . [ cols="^,^,^,^,^",options="header " , ] we present here only the algorithm of greatest interest for quark propagator calculations , bicgstab - m .the bicgstab - m algorithm is a mixture between the bicg and the mr algorithm .it is therefore not surprising that we can simply use the formulae for the two - term and the coupled two - term recurrences and construct a shifted algorithm . in the bicgstab algorithm , we generate the following sequences where and are exactly the bicg - polynomials and is derived from a minimal residual condition . for the shifted algorithm we demand using the above formulae we can explicitly determine the constants and and the shifted parameters ot the polynomials .the remaining difficulty is to derive the iteration for the solution and the vector .the update of these two vectors has the form this means we have to eliminate from the update of .the updates for the shifted vectors and then look as follows : we therefore need to introduce 2 vectors for each shifted system and one additional vector to store . note that the case leads to a breakdown of the bicgstab algorithm and does not introduce any new problems for the shifted method .the convergence of the shifted algorithms can be verified by checking that .it is however generally advisable for all shifted algorithms to test all systems for convergence after the algorithm finishes since a loss of the conditions ( [ shiftc ] ) due to roundoff errors might lead to erratic convergence .the most serious limitation of the method is given by the fact that the starting residual for all systems must be the same , which excludes -dependent left preconditioning .furthermore , preconditioning must retain the shifted structure of the matrix .this means that especially even - odd preconditioning is not applicable . to stabilize the algorithm, however , one can apply polynomial preconditioning : we must have note that might not be a good preconditioner for , which is compensated for by the faster convergence of the shifted system .a linear preconditioner , which was also proposed in , is given by for the wilson and clover matrix , this polynomial has the property that is a good preconditioner for .this preconditioner has been found to work well in those cases .higher order polynomials can be derived from condition ( [ condpre ] ) .we presented a simple point of view to understand the structure of krylov space algorithms for shifted systems , allowing us to construct shifted versions of most short recurrence krylov space algorithms .the shifted cg - m and cr - m algorithm can be applied to staggered fermion calculations . 
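as an aside before the concluding remarks , the principle that a single krylov space serves every shift can be checked numerically ; the sketch below is not the bicgstab - m recursion derived above but a naive galerkin projection onto an explicitly stored arnoldi basis , which already shows that one set of matrix - vector products yields solutions for all shifted systems ( the short - recurrence algorithms achieve the same without storing the basis ) . the test matrix , sizes and shifts are made up for illustration .

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 60                       # problem size, krylov dimension
A = rng.standard_normal((n, n))
A = A @ A.T / n + np.eye(n)          # hermitean positive definite test matrix
b = rng.standard_normal(n)
sigmas = [0.0, 0.1, 0.5, 1.0]        # the shifts

# arnoldi: one set of matrix-vector products builds a basis of K_k(A, b);
# since K_k(A + sigma*I, b) = K_k(A, b), the same basis serves every shift.
Q = np.zeros((n, k + 1))
H = np.zeros((k + 1, k))
Q[:, 0] = b / np.linalg.norm(b)
for j in range(k):
    w = A @ Q[:, j]
    for i in range(j + 1):           # modified gram-schmidt orthogonalization
        H[i, j] = Q[:, i] @ w
        w -= H[i, j] * Q[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    Q[:, j + 1] = w / H[j + 1, j]

beta = np.linalg.norm(b)
for s in sigmas:
    # shifting A only shifts the projected matrix: H_s = H_k + sigma * I
    Hs = H[:k, :k] + s * np.eye(k)
    y = np.linalg.solve(Hs, beta * np.eye(k)[:, 0])
    x = Q[:, :k] @ y                 # galerkin approximation for (A + sigma I) x = b
    res = np.linalg.norm(b - (A + s * np.eye(n)) @ x)
    print(f"sigma = {s:4.2f}   |r| = {res:.2e}")
```

for a hermitean positive definite test matrix this galerkin step reproduces , in exact arithmetic , the iterates that a shifted cg would build implicitly , which is why the residuals of all shifted systems drop together .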
since efficient preconditioners for the staggered fermion matrixare not known , a very large improvement by these algorithms can be expected .we presented the bicgstab - m method , which , among the shifted algorithms , is the method of choice for quark propagator calculations using wilson ( and presumably also clover ) fermions if enough memory is available .the numerical stability of the algorithms has been found to be good .roundoff errors might however in some cases affect the convergence of the shifted systems so that the final residuals have to be checked .other discussions can be found in .this work was supported in part by the u.s .department of energy under grant no .de - fg02 - 91er40661 .i would like to thank s. pickels , c. mcneile and s. gottlieb for helpful discussions .
we present a general method to construct multiple mass solvers from standard algorithms . as an example , the bicgstab - m algorithm is derived .
when performing optical polarimetry of astronomical objects , we wish to answer three distinct , but related , physical questions . firstly , is the object polarized at all ? secondly , if it is , what is the best estimate of the polarization ? and thirdly , what confidence can we give to this measure of polarization ? in addition to these physical questions is a presentational one : in what format should the results be published , so that they will be of most utility to the scientific community ? the questions of quantifying and presenting data on linear polarization have been discussed at length by simmons & stewart , who note that the traditional method used by optical astronomers , that of serkowski , does not give the best estimate of the true polarization under most circumstances . using their recommendations , i present here a recipe for reducing polarimetric data . in this paper , i will not consider the origin of the polarization of light . it may arise from intrinsic polarization of the source , from interaction with the interstellar medium , or within earth's atmosphere . each of these sources represents a genuine polarization , which must be taken into account in explaining the measured polarization values . most modern optical polarimetry systems employ a two - channel system , normally a wollaston prism . such a prism splits the incoming light into two parallel beams ( ` channels ' ) with orthogonal polarizations - it functions as a pair of co - located linear analyzers . the transmission axes of the analyzers can be changed either by placing a half - wave plate before the prism in the optical path , and rotating this , or by rotating the actual wollaston prism . such a system is incapable of distinguishing circularly polarized light from unpolarized light , and references to ` unpolarized ' light in the remainder of this paper strictly refer to light which is not linearly polarized ; it may be totally unpolarized ( i.e. randomly polarized ) , or may include a circularly polarized component . where a half - wave plate is used , an anticlockwise rotation of the waveplate results in an anticlockwise rotation of of the transmission axes . ( for the theory of wollaston prisms and wave plates , see , for instance , chapter 8 in hecht . ) we will suppose that channel 1 of the detector has a transmission axis which can be rotated by some angle anticlockwise on the celestial sphere , relative to a reference position east of north . ( see figure [ figeta ] , not reproduced here : a sketch of the plane of the sky showing north ( declination ) , east ( right ascension ) , the propagation direction towards earth , the reference direction r , and the transmission axes t1 and t2 of the two channels . ) the transmission axes t1 , t2 , of channels 1 and 2 are hence at and respectively . the reference angle will depend on the construction of the polarizer , and will not , in general , be neatly due north .
for mathematical convenience in the rest of this paper , we will take to define a reference direction , ` r ' , in our instrumental co - ordinate system and relate all other angles to it .such instrumental angles can then be mapped on to the celestial sphere by the addition of .since the light emerging in the two beams has traversed identical paths until reaching the wollaston prism , this method of polarimetry does not suffer from the systematic errors due to sky fluctuation which affect single - channel polarimetry ( where a single beam polarimeter alternately samples the two orthogonal polarizations ) .the two channels will each feed some sort of photometric array , e.g. a ccd , which will record a photon count . sincesuch images are usually built up by a process of shifting the image position on the array and combining the results , we will refer to a composite image taken in one transmission axis orientation , , as a _mosaic_. we will denote the rate of arrival of photons recorded in channel 1 and channel 2 by and respectively . from these rates , we can calculate the total intensity ( ) of the source , and the difference ( ) between the two channels : we can also define a _normalized _ difference : the purpose of this paper is to discuss how to interpret and present such data .suppose we have a beam of light , which has a linearly polarized component of intensity , whose electric vector points at an angle anticlockwise of r. its unpolarized component is of intensity .when such a beam enters our detector , we can use malus law to deduce that and from which we find and , less trivially , .\label{spol}\ ] ] the _ degree of linear polarization _ , , is defined by and so we can obtain the normalized difference by substituting equations [ ipol ] , [ spol ] and [ ppol ] into [ normdiff ] : .\ ] ] now , if observations have been made at a number of different angles , , of the transmission axis , then a series of values for and will be known , and and may be determined by fitting a sine curve to this data , weighted by errors as necessary .this method has been used , for example , by di serego alighieri _ et al .( their refinement of the method allowed for the correction of the for instrumental polarization at each , which was necessary as they were rotating the entire camera , their system having no half - wave plate . )we note that if there is any systematic bias of channel 1 compared to channel 2 , this will show up as an -independent ( dc ) term added to the sinusoidal component when is fitted to the data .such bias could arise if an object appears close to the edge of the ccd in one channel , for example .polarized light is normally quantified using stokes parameterisation .( for basic definitions see , for example , clarke , in gehrels ( ed . ) . 
)four variables are used , but one , , is only applicable to circular polarization , which a system involving only half - wave plates and linear analyzers can not measure .the total intensity , , of the light is an absolute stokes parameter .the other two parameters are defined relative to some reference axis , which in our case will be r , the direction .thus we define : and _ normalized _ stokes parameters are denoted by lower case letters ( ,, ) , and are found by dividing the raw parameters by .we note that and the normalized can be thought of as a stokes parameter like or , generalised to an arbitrary angle - and results which can be derived for ( or ) will apply to and ( or and ) as special cases .if the stokes parameters are known , then the degree and angle of polarization can be found : where the signs of and must be inspected to determine the correct quadrant for the inverse tangent .note that , and _ must _ be defined as above to be consistent with the choice of r as reference .we must now distinguish between the true values of the stokes parameters for a source , and the values which we measure in the presence of noise .we will use the subscript to denote the underlying values , and the subscript for individual measured values . in particular , consider a source which is not polarized , so , , and is undefined .since the and include noise , they will not , in general , be zero , and because of the form of equation [ defp ] , will be a definite - positive quantity . in short , is a _ biased _estimator for .there is no known _ unbiased _estimator for , and simmons & stewart discuss at length the question of which estimator should be used .they conclude that the stokes parameters themselves are more useful than and in many applications , and it is recommended , therefore , that all published polarimetric data should ideally give the normalized stokes parameters , with or without evaluation and discussion of and .given this preference for the stokes parameters it appears that one should eschew the curve fitting method in favour of direct evaluation of the parameters , at least when we only have data for the usual angles .in practice , observers will take several observations of an object at each transmission angle .this raises the question of how best to combine all the measured values to yield a single pair of ` best estimators ' for and a question which is dealt with by clarke _et al . _on the basis of this prior work and set of recommendations , it is now possible to present a ` recipe ' for reducing polarimetric data .the raw numbers which our photometric system produces will be a set of photon count rates and , together with their errors , and .errors arise from three sources : photon shot noise ; pixel - to - pixel variations in the sky value superimposed on the target object ; and imperfect estimation of the modal sky value to subtract from the image .the photons arrive at the detector according to a poisson distribution .let the total integration time for a mosaic taken at a given rotation of the polarizer , be . if the detector requires photons to arrive in order for one ` count ' to be registered , then the total number of photons incident to produce the measured signal is . under poisson statistics , using units of ` numbers of photons ', the standard deviation of the number of photons arriving in this time - bin is the square root of the mean number arriving , _ viz . _ . 
in our detector - based count rate units , therefore , the error contributed is .provided , then the shot noise will be normally distributed , to a good approximation . the modal value of a sky pixel , can be found by considering , say , the pixel values in an annulus of dark sky around the object in question , an annulus which contains pixels altogether .the root - mean - square deviation of these pixels values about the mode can also be found , and we will label this , . hence we can estimate the error on the mode , . if we perform aperture - limited photometry on our target , with an aperture of area in pixels , we must subtract the modal sky level , , which will introduce an error .each individual pixel in the aperture will be subject to a random sky fluctuation ; adding these in quadrature for each of the pixels , we obtain an error .ultimately , the error on the measured , normalized , intensity , is the sum in quadrature of the three quantities , , , and .if the areas of the aperture and annulus are comparable , then both the second and third terms will be significant ; in practice , for long exposure times , the first ( shot ) noise term will be much smaller and can be neglected .this is important as , unlike the sky noise , the shot noise depends on the magnitude of the target object itself .if its contribution to the error terms is negligible , then sky - dominated error terms can be compared between objects of different brightness on the same frame .[ step]data check [ smallshot ] for each object observed in each channel of each mosaic , the photometry system will have produced a count rate with an error , .for each such measurement , calculate and verify that it is much less than . then one can be certain that the noise terms are dominated by sky noise rather than shot noise . in practice , for each target object , we will have taken a number of mosaics at each angle .we can immediately use each pair of intensities to find and using equations [ idef ] and [ sdef ] .since the errors on the two channels are independent , we can trivially find the errors on both and ; the errors turn out to be identical , and are given by : [ cbias ] take the mean value of all the by summing over all the values at all angles ; and obtain an error on this mean by combining in quadrature the error on each .if the mean value of , averaged over all the angles , is significantly greater than the propagated error , then there may be some dc bias .check [ cbias ] uses as a measure of excess intensity in channel 1 over channel 2 , and relies on the fact that there are similar numbers of observations at and to average away effects due to polarization .if , as may happen in real data gathering exercises , there are not _ identical _ numbers of observations at and , this could show up as apparent ` dc bias ' in a highly polarized object . in practice , however , we are unlikely to encounter this combination of events ; testing for bias by the above method will either reveal a bias much greater than the error ( where the cause should be obvious when the original sky images are examined ) ; or a bias consistent with the random sky noise , in which case we can assume that there is no significant bias . once we are satisfied that our raw data are not biased , we can proceed . 
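a hedged sketch of this per - mosaic bookkeeping and of check [ cbias ] , with made - up numbers standing in for the photometry ( the 3 - sigma cut on the mean difference is a choice made here , not a prescription from the text ) :

```python
import numpy as np

# hypothetical photometry for one object: count rates in the two channels
# and their (sky-dominated) errors, one entry per mosaic, taken while
# cycling through the usual wave-plate angles.
n1 = np.array([1520., 1490., 1465., 1502., 1518., 1488., 1470., 1499.])
n2 = np.array([1480., 1515., 1528., 1493., 1484., 1510., 1522., 1490.])
err1 = np.full_like(n1, 12.0)
err2 = np.full_like(n2, 12.0)

I = n1 + n2                    # total intensity per mosaic
S = n1 - n2                    # channel difference per mosaic
sigma = np.hypot(err1, err2)   # the sum and the difference share this error

# dc-bias check: averaged over all angles, the channel difference should be
# consistent with the propagated error if neither channel is biased.
mean_S = S.mean()
err_mean_S = np.sqrt(np.sum(sigma ** 2)) / len(S)
print(f"mean difference {mean_S:+.1f} +/- {err_mean_S:.1f}")
if abs(mean_S) > 3.0 * err_mean_S:
    print("possible dc bias between the channels - inspect the raw frames")
```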
at this stage in our data reduction, we will find it convenient to divide our set of values , together with their associated values , into the named stokes parameters , and in the rest of this paper , symbols such as and , where not followed by , can be read as denoting ` either or ' , ` either or ' , etc .. [ getthei ] for each pair of data , produce the sum , , and the difference , or as appropriate . using equation [ erreq ] , produce the error common to the sum and difference , or . also find the normalized difference , or . in practice , for a given target object , we will have taken a small number of measurements of and say and respectively with individual errors obtained for each measurement .if the errors on the individual values are not comparable , but vary widely , we may need to consider taking a weighted mean .[ maxbig ] for a set of measurements of , take all the measured errors , ; and so find the mean error ( call this ) and the maximum deviation of any individual error from . if the maximum deviation is large compared to the actual error , consider whether you need to weight the data . if the deviations are large , we can weight each data point , , by ; but we will not pursue the subject of statistical tests on weighted means here . in practice , one normally finds that the noise does not vary widely between measurements .we have already checked ( see check [ smallshot ] ) that the shot noise is negligible compared with the sky noise terms . therefore, the main source of variation will be the sky noise .if the maximum deviation of the errors from is small , then we can infer that the fluctuation in the sky pixel values is similar in all the mosaics .[ assumenorm ] in order to carry the statistical treatment further , we must assume that the sky noise is normally distributed .this is standard astronomical practice .[ getmean ] from the sample of stokes parameters , and , obtained in step [ getthei ] , find the two means , and , with their corresponding intensities and ; and find the standard deviations of the two * samples * , and . since modern photometric systems can estimate the sky noise on each frame , we are faced throughout our data reduction sequence with a choice between two methods for handling errors .we can propagate the errors on individual measurements through our calculations ; or we can use the standard deviation , , of the set of sample values , . in this paper ,i use the symbol to denote the measured ( sky - dominated ) error on , and for the standard error on the estimated mean , .the standard deviation of the population , which is the expected error on a single measurement , could be denoted , but above i used to make its photometric derivation obvious .using statistical estimators discards the data present in the photometric noise figures and uses only the spread in the data points to estimate the errors .we would expect the statistical estimator to be of similar magnitude to the photometric error in each case ; and a cautious approach will embrace the greater of the two errors as the better error to quote in each case . because we may be dealing with a small sample ( size ) for some stokes parameter , , the standard deviation of the sample , , will not be the best estimator of the population standard deviation .the best estimator is ( * ? ? 
?* , for example ) : in this special case of the * population * standard deviation , i have used the notation for clarity .conventionally , is used for the ` best estimator ' standard deviation , but this symbol is already in use here for a general normalized stokes parameter , so in this paper i will use the variant form of sigma , , for errors derived from the sample standard deviation , whence , and the ( statistical ) standard error on the mean is the mean value of our stokes parameter , , is the best estimate of the true value regardless of the size of .given a choice of errors between and , we will cautiously take the greater of the two to be the ` best ' error , which we shall denote .[ noiseok ] we now have two ways of estimating the noise on a single measurement of a stokes parameter : is the mean sky noise level obtained from our photometry system : check [ maxbig ] obtains its value and verifies that the noise levels do not fluctuate greatly about this mean . fluctuations in the actual values of the stokes parameter in question are quantified by , obtained by applying equation [ stateq ] to the data from step [ getmean ] .we would expect the two noise figures to be comparable , and this can be checked in our data .we may also consider photometry of other objects on the same frame : check [ smallshot ] shows us that the errors are dominated by sky noise , and should be comparable between objects , correcting for the different apertures used : we therefore take the best error , , on a stokes parameter , , to be the greater of and .if our data passes the above test , then we can be reasonably confident that the statistical tests we will outline in the next sections will not be invalidated by noise fluctuations .the linear polarization of light can be thought of as a vector of length and phase angle .there are two independent components to the polarization .if either or is non - zero , the light is said to be polarized .conversely , if the light is to be described as unpolarized , both and must be shown to be zero . the simplest way to test whether or not our target object emits polarized light is to test whether the measured stokes parameters , and , are consistent with zero .if either parameter is inconsistent with zero , then the source can be said to be polarized . to proceed , we must rely on our assumption ( step [ assumenorm ] ) that the sky - dominated noise causes the raw stokes parameters , to be distributed normally. then we can perform hypothesis testing ( * ? ? ?* chapters 12 and 16 ) for the null hypotheses that and are zero . here , noting that the number of samples is typically small ( ) we face a choice : : assume that the sky fluctuations are normally distributed with standard deviation , and perform hypothesis testing on the standard normal distribution with the statistic : : use the variation in the values to estimate the population standard deviation , and perform hypothesis testing on the student s distribution with degrees of freedom , using the statistic : in either case , we can perform the usual statistical test to determine whether we can reject the null hypothesis that ` ' , at the confidence level .the confidence intervals for retaining the null hypothesis will be symmetrical , and will be of the forms and . the values of and can be obtained from tables , and we define to be the greater of and . 
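a short sketch of these two statistics , with the critical values taken from scipy rather than from tables ; the decision rule coded here - demand that both routes reject - is one conservative reading used only for illustration , while the text 's own rule in terms of the larger critical value follows just below .

```python
import numpy as np
from scipy import stats

def consistent_with_zero(values, sky_sigma, conf=0.90):
    """test whether the mean of a set of raw stokes measurements (q or u)
    is inconsistent with zero, once via the known sky noise (normal
    statistic) and once via the sample scatter (student's t), keeping the
    more conservative verdict."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    mean = values.mean()
    z = mean / (sky_sigma / np.sqrt(n))            # photometric route
    k_z = stats.norm.ppf(0.5 + conf / 2.0)
    t = mean / (values.std(ddof=1) / np.sqrt(n))   # statistical route
    k_t = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
    return abs(z) > k_z and abs(t) > k_t           # reject only if both agree

# per-mosaic raw differences from step [getthei], purely illustrative numbers
q_raw = [41.0, 28.0, 35.0, 33.0]
u_raw = [-4.0, 6.0, -7.0, 2.0]
for name, data in (("q", q_raw), ("u", u_raw)):
    print(name, "inconsistent with zero:",
          consistent_with_zero(data, sky_sigma=17.0))
```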
then the more conservative hypothesis test will reject that null hypothesis at the confidence level when .in such a confidence test , the probability of making a ` type i error ' , i.e. of identifying an * unpolarized * target as being polarized in _one _ polarization sense , is simply .the probability of correctly retaining the ` unpolarized ' hypothesis is .the probability of making a ` type ii error ' ( i.e. not identifying a * polarized * target as being polarized in one polarization sense ) is not trivial to calculate . nowbecause there are two independent senses of linear polarization , we must consider how to combine the results of tests on the two independent stokes parameters .suppose we have a source which has no linear polarization .we test the two stokes parameters , and , for consistency with zero at confidence levels and respectively .the combined probability of correctly retaining the null hypothesis for both channels is , and that of making the type i error of rejecting the null hypothesis in either or both channels is .hence the overall confidence of the combined test is .since the null hypothesis is that and is undefined , there is no preferred direction in the null system , and therefore the confidence test should not prefer one channel over the other .hence the test must always take place with .even so , the test does not treat all angles equally ; the probability of a type ii error depends on the orientation of the polarization of the source .clearly if its polarization is closely aligned with a transmission axis , there is a low chance of a polarization consistent with the null hypothesis being recorded on the aligned axis , but a much higher chance of this happening on the perpendicular axis . as the alignment worsens , changing while keeping constant , the probabilities for retaining the null hypothesis on the two measurement axes approach one another .consider the case where we have taken equal numbers of measurements in the two channels , so , and where the errors on the measurements are all of order .hence we can calculate for the null hypothesis as above. its value will be common to the and channels , as the noise level and the number of measurements are the same in both channels .now suppose that the source has intensity and a true non - zero polarization oriented at position angle .then we can write , and . to generate a type ii error ,a false null result must be recorded on both axes .the probability of a false null can be calculated for specified and : defining then the probability of such a type ii error is dx\,dy.\ ] ] clearly this probability is not independent of .find the 90% confidence region limits , and , and inspect whether and . both stokes parameters fall within the limits , then the target is not shown to be polarized at the 81% confidence level . in this casewe can try to find polarization with some lower confidence , so repeat the test for .if the null hypothesis can be rejected in either channel , then we have a detection at the 72.25% confidence level .there is probably little merit in plumbing lower confidences than this . 
, however , polarization is detected in one or both of the stokes parameters at the starting point of 90% , test the polarized parameters to see if the polarization remains at higher confidences , say 95% and 97.5% .the highest confidence with which we can reject the null ( unpolarized ) hypothesis for either stokes parameter should be squared to give the confidence with which we may claim to have detected an overall polarization . in our hypothesis testing , we have made the _ a priori _ assumption that all targets are to be assumed unpolarized until proven otherwise .this is a useful question , as we must ask whether our data are worth processing further and we ask it using the raw stokes parameters , without resorting to complicated formulae . to publish useful results , however, we must produce the normalized stokes parameters , together with some sort of error estimate , and it is this matter which we will consider next .consider a general normalized stokes parameter for some angle , : clarke _ et al . _ point out that the signal / noise ratio obtained by calculating is much better than that obtained by simply taking the mean , since the equation [ stilde ] involves the taking of only one ratio , where the two terms and have better signal / noise ratios than the individual and which are ratioed in equation [ sbar ] .we also note that errors on and on are not independent of one another .we can write : we propagate through the errors on the intensities , we find : ^ 2 + [ ( 1+\tilde{s})\sigma_{\bar{n}_{2}}]^2}.\ ] ] in order to simplify the calculation , we recall that in check [ maxbig ] , we checked that the errors on all the ( and hence ) were similar .thus the mean error on _ one _ rate in _one _ channel is .since the number of measurements made of is , then and the error formula approximates to : in practice , we will be dealing with small polarizations , so , and knowing from equation [ stilde ] , then equation [ normerr ] approximates to : as we had before with and , so now we have a choice of using sky photometry or the statistics to estimate errors .the above method gives us the photometric error on a normalized stokes parameter as ; the statistical method would be to take the root - mean - square deviation of the measured , obtained in step [ getthei ] , about clarke _s best estimator value , : ^{{{\frac{1}{2}}}}\ ] ] following the method outlined for finding and , apply equations [ stilde ] and [ simerrs ] to the data obtained in step [ getmean ] to obtain with and with .[ stoeq ] using and , compute and ; find for both normalized stokes parameters , and compare it with in each case .verify also that the errors , , on the population standard deviations for the two stokes parameters are similar this should follow from the -independence of equation [ simerrs ] for small and .so which error should one publish as the best estimate , , on our final or ? 
again , a conservative approach would be to take the greater of the two in each case .[ gotnorm ] choose the more conservative error on each normalized stokes parameter , and record the results as and .record also the best population standard deviations , and .having obtained estimated values for and , with conservative errors , these values together with the reference angle can and should be published as the most convenient form of data for colleagues to work with .it is often desired , however , to express the polarization not in terms of and , but of and .simmons & stewart discuss in detail the estimation of the degree of linear polarization .their treatment assumes that the _ normalized _stokes parameters have a normal distribution , and that the errors on and are similar .this latter condition is true for small polarizations ( see check [ stoeq ] ) , but before we can proceed , we must test whether the former condition is satisfied . if one assumes ( step [ assumenorm ] ) that and are normally distributed , one can construct , following clarke _ et al ._ , a joint distribution for whose parameters are the underlying _ population _ means and standard deviations for the photon rates and .the algebra gets a little messy here , so we define three parameters , : ,\ ] ] ,\ ] ] .\ ] ] using these three equations , we can write the probability distribution for as : }{\sigma_1.\sigma_2.\sqrt{\pi.\alpha^3}.(1+s)^2}.\ ] ] this can be compared to the limiting case of the normal distribution whose mean and standard error are obtained by propagating the underlying means and standard deviations through equations [ getstilde ] and [ getsterr ] : }{\sigma_0.\sqrt{2\pi}};\ ] ] we can derive an expression for the ratio , which should be close to unity if the normalized stokes parameter , , is approximately normally distributed .[ nearnormal ] and using equations [ idef ] and [ sdef ] , and the data from step [ getmean ] .estimate , where is obtained from check [ noiseok ] . the values of and obtained in step [ gotnorm ] as the best estimates of and . use a computer program to calculate and plot in the domain .if r(s ) is close to unity throughout this domain , then we may treat the normalized stokes parameters as being normally distributed .if the data passes checks [ stoeq ] and [ nearnormal ] , then we can follow the method of simmons & stewart .they ` normalize ' the intensity - normalized stokes parameters , and , by dividing them by their common population standard deviation , .for clarity of notation , in a field where one can be discussing both probability and polarization , i will recast their formulae , such that the _ measured _ degree of polarization , normalized as required , is here given in the form ; and the _ actual _ ( underlying ) degree of polarization , also normalized , is .it follows from the definition of ( equation [ defp ] ) that if , then .now , simmons & stewart consider the case of a ` single measurement ' of each of and , whereas we have found our best estimate of these parameters following the method of clarke _ et al . _however , we can consider the whole process described by clarke _et al . 
_ as ` a measurement ' , and so the treatment holds when applied to our best estimate of the normalized stokes parameters , together with the error on that estimate . [ findperr ] find , and hence , by substituting our best estimates of and and their errors ( step [ gotnorm ] ) into equation [ errp ] . hence calculate : the probability distribution of obtaining a measured value , , for some underlying value , , is given by the rice distribution , which is cast in the current notation using the modified bessel function , ( as defined in ch.12 , 17 ) : f(m , a ) = m.\exp \left [ -\frac{(m^2+a^2)}{2 } \right ] .i_0(ma ) \ldots ( m\geq 0)\ ] simmons & stewart have tested various estimators for bias . they find that when , the best estimator is the ` maximum likelihood estimator ' , , which maximises with respect to . so is the solution for of : if then the solution of this equation is . when , the best estimator is that traditionally used by radio astronomers , e.g. wardle & kronberg . in this case , the best estimator , , is that which maximises with respect to m , being the solution for of : if then the solution of this equation is . simmons & stewart graph for both cases , and so show that is a monotonically increasing function of , and that . but which estimator should one use ? under their treatment , the selection of one of these estimators over the other depends on the underlying value of ; they point out that there may be good _ a priori _ reasons to assume greater or lesser polarizations depending upon the nature of the source . if we do not make any such assumptions , we can use the monotonicity of and the inequality , to find two limiting cases : let be the solution of the wardle & kronberg equation ( [ wkest ] ) for with . hence if , then and the maximum likelihood estimator is certainly the most appropriate . calculating , we find and so the maximum likelihood estimator will in fact be zero . let be the solution of the maximum likelihood equation ( [ mlest ] ) for with . we find . hence if , then , and wardle & kronberg s estimator will clearly be the most appropriate . between these two extremes , we have . this presents a problem , in that each estimator suggests that its estimate is more appropriate than that of the other estimator . if our measured value is , what should we take as our best estimate ? we could take the mean of the two estimators , but this would divide the codomain of into three discontinuous regions ; there might be some possible polarization which this method could never predict ! it would be better , then , to interpolate between the two extremes , such that in the range , if we do not know , _ a priori _ , whether a source is likely to be unpolarized , polarized to less than 1% , or with a greater polarization , then would seem to be a reasonable estimator of the true noise - normalized polarization , and certainly better than the biased . [ esta ] use the above criteria to find , and hence obtain the best estimate , , of the true polarization of the target .
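the two point estimators discussed above can be evaluated numerically from the rice distribution itself . the sketch below ( python with scipy ; the measured value , the search bounds and the function names are illustrative assumptions , not the paper's notation ) finds the maximum likelihood estimate by maximizing the density with respect to the underlying polarization , and the wardle & kronberg estimate as the underlying value whose distribution of measured values peaks at the observed one .

import numpy as np
from scipy.optimize import brentq, minimize_scalar
from scipy.special import ive

def log_rice(m, a):
    """log of the rice density f(m, a) = m exp[-(m^2 + a^2)/2] i0(ma), m >= 0.
    ive is the exponentially scaled bessel function, used for numerical stability."""
    return np.log(m) - 0.5 * (m - a) ** 2 + np.log(ive(0, m * a))

def ml_estimate(m_obs):
    """maximum likelihood estimate: the a >= 0 maximizing f(m_obs, a);
    for small m_obs the maximum sits at the a = 0 boundary."""
    res = minimize_scalar(lambda a: -log_rice(m_obs, a),
                          bounds=(0.0, m_obs + 5.0), method="bounded")
    return float(res.x)

def wk_estimate(m_obs):
    """wardle & kronberg estimate: the a for which the density of measured
    values is stationary (peaks) at m_obs; zero when no such a > 0 exists."""
    def slope(a):   # d(log f)/dm evaluated at the observed value
        return 1.0 / m_obs - m_obs + a * ive(1, m_obs * a) / ive(0, m_obs * a)
    if slope(0.0) >= 0.0:
        return 0.0
    return brentq(slope, 0.0, m_obs + 5.0)

m_obs = 1.2   # hypothetical measurement in units of its error
print(ml_estimate(m_obs), wk_estimate(m_obs))
# a weighted mixture of the two outputs, varying smoothly with m_obs, is one way
# to realize the interpolated estimator advocated in the text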
as well as a point estimate for , we would like error bars . the rice distribution , equation [ rice ] , gives the probability of obtaining some given , and can , therefore , be used to find a confidence interval for the likely values of given . we can define two functions , and , which give the lower and upper confidence limits for , with some confidence ; integrating the rice distribution , these will satisfy : and such that such confidence intervals are non - unique , and we need to impose an additional constraint . we could require that the tails outside the confidence region be equal , , but following simmons & stewart , we shall require that the confidence interval have the smallest possible width , in which case our additional constraint is : f[{{\mathcal{u}}(a)},a ] = f[{{\mathcal{l}}(a)},a].\ ] from the form of the rice distribution , and will be monotonically increasing functions of , as shown in figure [ ricefig ] . given a particular underlying polarization , the confidence interval can be obtained by numerically solving equations [ ldef ] thru [ aconst ] to yield and . now , it can be shown ( ch.viii , 4.2 ) that the process can also be inverted , i.e. if we have obtained some measured value , then solving for will yield a confidence interval , such that the confidence of lying within this interval is . since the contours for and cut the -axis at non - zero values of , we must distinguish three cases , depending on whether or not lies above one or both of the intercepts . the values of and depend only on the confidence interval chosen ; substituting into equations [ ldef ] thru [ aconst ] results in the pair of equations = \exp \left [ -\frac{{{\mathcal{l}}(0)}^2}{2 } \right ] - \exp \left [ -\frac{{{\mathcal{u}}(0)}^2}{2 } \right]\ ] and { { \mathcal{l}}(0)}.\exp \left [ -\frac{{{\mathcal{l}}(0)}^2}{2 } \right ] = { { \mathcal{u}}(0)}.\exp \left [ -\frac{{{\mathcal{u}}(0)}^2}{2 } \right].\ ] a numerical solution of this pair of equations can be found for any given confidence interval , ; we find that , in a 67% interval , , while in a 95% interval , . hence , knowing , and having chosen our desired confidence level , we can determine the interval by the following criteria : there are non - zero solutions for both and . in this case , , and we must solve . here , . simmons & stewart note that the third case is formally a confidence interval of zero width , and suggest that this is counter - intuitive ; and they go on to suggest an _ ad hoc _ method of obtaining a non - zero interval . however , it is perfectly reasonable to find a finite probability that the degree of polarization is identically zero : the source may , after all , be unpolarized . this can be used as the basis of estimating the probability that there is a non - zero underlying polarization , as will be shown in the next section . [ getint ] knowing from step [ findperr ] , find the limits appropriate to confidence intervals of 67% and 95% . hence , multiplying by , find the confidence intervals on the estimated degree of polarization . the 67% limits may be quoted as the ` error ' on the best estimate . consider the contour on figure [ ricefig ] . as defined by equation [ udef ] and the inversion of mood _ et al .
_ , it divides the domain into two regions , such that there is a probability of the underlying polarization being greater than . there is clearly a limiting case where the contour cuts the -axis at , hence dividing the domain into the polarized region with probability , and the unpolarized region with probability . now we may substitute the rice distribution , equation [ rice ] , into equation [ udef ] and evaluate it analytically for the limiting case , : equation [ propol ] hence yields the probability that a measured source actually has an underlying polarization . [ estpolun ] substitute from step [ findperr ] into equation [ propol ] . hence quote the probability that the observed source is truly polarized . it remains to determine the axis of polarization , for which an unbiased estimate is given by equation [ phidef ] . once again , we have a choice of using the statistical or photometric errors and , indeed , a choice of raw or normalized stokes parameters . our first problem is to obtain the best figure for . now , as we saw in our discussion of the best normalized stokes parameter , it is better to ratio a pair of means than to take the mean of a set of ratios . we could take , but for a very small sample , there is the danger that the mean intensity of the observations will differ from that of the values . therefore , we should use the normalized stokes parameters , and the least error prone estimate of the required ratio will be , yielding . knowing the errors on and , we can find the propagated error in : given the non - linear nature of the tan function , the error on should be found by separately calculating and . careful attention must be paid in the case where the error takes the phase angle across the boundary between the first and fourth quadrants , as the addition of to the inverse tangent may be necessary to yield a sensible error in the phase angle . [ propphi ] obtain , the best estimate of , and the propagated error on it , \sigma_{\tilde{\phi } } = { { \frac{1}{2}}}(|\sigma_+| + |\sigma_-| ) ; add it to the reference angle and hence quote the best estimate of the polarization orientation in true celestial co - ordinates . for the statistical error , we note that the probability distribution of observed _ phase _ angles , , calculated by vinokur , and quoted in wardle & kronberg , is : .\ ] + \frac{a \cos(\theta-\theta_0)}{\sqrt{2\pi}}.\left\ { { { \frac{1}{2}}}+ f[a \cos(\theta-\theta_0 ) ] \right\ } \right\}\ ] where and is the error function as defined in boas , ch.11 , 9 . we do not know , and will have to use our best estimate , , as obtained from step [ esta ] . the confidence interval on the measured angle , , is given by numerically solving ; in this case we choose the symmetric interval , . [ findangle ] obtain the limiting values of for confidence intervals of 67% and 95% . quote the 67% limits as . choose the more conservative error from and as the best error , . it may be instructive to note how the process of reducing polarimetric data outlined in this paper compares with the methods commonly used in the existing literature . the paper by simmons & stewart gives a thorough review of five possible point estimators for the degree of polarisation . one of these methods is the trivial as an estimator of . the other four methods all involve the calculation of thresholds : if then .
these four methods are the following : 1 . maximum likelihood : as defined above , is the value of which maximises with respect to . hence is the solution for of equation [ mlest ] . the limit is found by a numerical method . 2 . median : fixes the distribution of possible measured values such that the actual measured value is the _ median _ , hence . the threshold is , being the solution of . 3 . serkowski s estimator : fixes the distribution of possible measured values such that the actual measured value is the _ mean _ , hence . the threshold is . 4 . wardle & kronberg s method : as defined above , the estimator , , is that which maximises with respect to ( see equation [ wkest ] ) , and . simmons & stewart note that although widely used in the optical astronomy literature , serkowski s estimator is not the best for either high or low polarizations ; they find that the wardle & kronberg method commonly used by radio astronomers is best when , i.e. when the underlying polarization is high and/or the measurement noise is very low . the maximum likelihood method , superior when ( i.e. in ` difficult ' conditions of low polarization and/or high noise ) , appears to be unknown in the earlier literature . in this paper , i have merely provided an interpolation scheme between the point estimators which they have shown to be appropriate to the ` easy ' and ` difficult ' measurement regimes . the construction of a confidence interval to estimate the error is actually independent of the choice of point estimator , although ( as mentioned above ) i believe that simmons & stewart s unwillingness to ` accept sets of zero interval as confidence intervals ' is unfounded , since physical intuition allows for the possibility of truly unpolarised sources ( i.e. with identically zero polarizations ) , and their arbitrary method of avoiding zero - width intervals can be dispensed with . the reduction of polarimetric data can seem a daunting task to the neophyte in the field . in this paper , i have attempted to bring together in one place the many recommendations made for the reduction and presentation of polarimetry , especially those of simmons & stewart , and of clarke _ et al . _ in addition , i have suggested that it is possible to develop the statistical technique used by simmons & stewart to obtain a simple probability that a measured object has non - zero underlying polarization . i have also suggested that there is a form of estimator for the overall degree of linear polarization which is more generally applicable than either the maximum likelihood or the wardle & kronberg estimators traditionally used , and which is especially relevant in cases where the measured data include degrees of polarization of order 0.7 times the estimated error . modern computer systems can estimate the noise on each individual mosaic of a sequence of images ; this is useful information , and is not to be discarded in favour of a crude statistical analysis . a recurring theme in this paper has been the comparison of the errors estimated from propagating the known sky noise , and from applying sampling theory to the measured intensities . bearing this in mind , i have presented here a process for data reduction in the form of [ findangle ] rigorous steps and checks . the recipe might be used as the basis of an automated data reduction process , and i hope that it will be of particular use to the researcher , automated or otherwise , who is attempting polarimetry for the first time .
many different methods exist for reducing data obtained when an astronomical source is studied with a two - channel polarimeter , such as a wollaston prism system . this paper presents a rigorous method of reducing the data from raw aperture photometry , and evaluates errors both by a statistical treatment , and by propagating the measured sky noise from each frame . the reduction process performs a hypothesis test for the presence of linear polarization . the probability of there being a non - zero polarization is obtained , and the best method of obtaining the normalized stokes parameters is discussed . point and interval estimates are obtained for the degree of linear polarization , which is subject to positive bias ; and the polarization axis is found .
unlike most polymers , each natural protein folds over itself into a specific structural conformation , its native structure . the series of events that drives a polypeptidic chain into its native structure , the folding process , is not yet fully understood : protein systems involve many complex interactions , and present several remarkable properties that seem to require new experimental , theoretical , and computational approaches . although the folding process is surprisingly quick , folding rates of different proteins can span several orders of magnitude ( is a measure of how fast the folding process leads the chain from the unfolded state up to the native structure ) , and this remains true even for proteins of approximately the same size . moreover , even for single domain , two state , small proteins , existing theories for the kinetics of folding can not quantitatively predict this experimental observation , probably because the folding mechanisms have been routinely proposed from their ensemble - averaged properties , and from conflicting interpretations of its fundamentals . hence , the alternative ideas and hypotheses about folding explain the phenomenon only partially . for instance , the concept of transition state explains satisfactorily the two state kinetics but not the folding reaction rates , while the funnel landscape idea can give insights about the folding rates but not about the two state kinetics . a remarkable characteristic of the folding process is its robustness , which can be illustrated by two intriguing properties : first , one finds that the folding is similarly processed in a large temperature range , covering about 100 ; and second , all functional proteins of all organisms , which live in the most different environments , fold correctly , and are stable around a particular ideal temperature . indeed , living organisms are found in extreme conditions : some live in environments with temperatures near freezing water , while others are found in places with temperatures of boiling water . therefore , the search mechanism must work properly in the temperature interval from about zero to about 100 , whereas for each living species the range of functional temperatures is in general relatively much smaller .
proteins also present an extraordinarily precise and fast self - organization process . they fold some ten orders of magnitude faster than the predicted rate of a random search mechanism ; it is as if each protein had been designed to fold as fast as possible . indeed , the probability of finding a fast - folding sequence , choosing it randomly from the set of all possible sequences , is very small . however , there is also a physiological reason for fast folding : because cells do not have enough chaperone molecules to support the folding of every protein ( chaperones , in any case , are also proteins ) , proteins must fold very rapidly in order to avoid aggregation due to exposing hydrophobic areas of their surface for too long . globular proteins can be considered as independent nanomachines . this particularity , in combination with the nature of most currently available experimental data , can be considered as one of the sources of certain inadequate views of the folding problem . the stable appearance and the properties of homogeneous macroscopic objects result from the average activity of a very large number of atoms , but , contrasting with this scenario , nanostructures like colloidal particles or proteins , in contact with a thermal reservoir ( the solvent ) , experience thermal fluctuations in a special way . actually , local unbalanced forces continuously shake and deform each such nanostructure , an effect which can not be revealed by most of the available data about protein kinetics , since such data just reflect the collective behavior of a huge number of molecules in dilute aqueous solutions . that is , the result is a kind of temporally averaged view of the phenomenon . nevertheless , new data and ideas start to emerge from single molecule experiments , such as on transition paths at equilibrium , which are only observable for single molecules , allowing one to obtain crucial mechanistic information , for instance folding and unfolding rates . part of such general properties may be better understood if one considers that the search mechanism is governed mainly by the hydrophobic effect , whose strength , as shown experimentally at least for small hydrophobic molecules , varies slightly in the temperature interval from about zero to 100 . therefore , it is suggested that the folding process should be composed of two temporal steps : the search mechanism , as the first stage , followed by the overall stabilization , which only begins with the chain close enough to its native conformation , when energy and structural requirements , as encoded in the residue sequence , would be associated in a productive and cooperative way .
based on these general properties , a few hypotheses can be formulated ; therefore we assume , as general grounds for the folding problem , the following three statements : - the complete folding process is composed of two temporally independent steps , namely : the search mechanism , and the overall productive stabilization . - for typical one domain , two state globular proteins , the folding instructions encoded in the residue sequence provide folding kinetics as fast as possible ; and - at nanoscale dimensions randomness emerges as a peculiar attribute of the protein molecule , which should be treated individually and appropriately with respect to the effects of local thermal fluctuations . our goal in this work is to show evidence concerning the importance of local thermal fluctuations on the kinetics of the folding process of globular proteins . the simplified model employed here ( next section ) focuses exclusively on the search mechanism and has the hydrophobic effect as its grounds . fluctuation effects on a nanoscale structure are treated in the context of nonextensive statistical mechanics ( section iii ) , and are analyzed in detail through the dependence of the folding characteristic time on the temperature and nonextensive parameter ( section iv ) . small , single - domain globular proteins are used here as ideal prototypes ; usually many of them fold via an all - or - nothing process , that is , without detectable intermediates . comments and conclusions ( section v ) are formulated according to the three hypotheses stated above . the model presented here is based on the first hypothesis , stated in the previous section . it is devoted just to the first stage of the folding process , the search mechanism , in order to explore general aspects of the folding problem valid , in principle , for all proteins . therefore a lattice model is used : effective residues ( a chain of 27 beads ) , occupying consecutive and distinct sites of a three - dimensional infinite cubic lattice , represent a single protein - like chain in solution ; effective solvent molecules , which explicitly interact with the chain , fill up the vacant lattice sites . the general scheme to explore the configurational space presumes that , during the simulation , solvent molecules and chain units exchange their respective sites so that all sites of the lattice always remain fully filled . for each configurational change , only the transfer free energy ( variations on the hydrophobic energy ) is taken into account , given that the model is conceived to deal specifically with the search mechanism ; solvent - solvent and residue - residue interactions are represented by hard core - type interactions ( excluded volume ) .
for a regular cubic lattice , which in the present case means uniform solvent density , this interaction scheme is exactly equivalent to the use of additive , first neighbor , inter - residue pairwise potentials , namely , where is the hydrophobic level of the residue in the chain sequence . residues are taken from a repertory of ten distinct units ( a ten - letter alphabet ) , which are characterized by distinct hydrophobic levels and a set of inter - residue steric specificities . the hydrophobic levels have been considered the most general and influential chemical factor acting along the folding process , while the set of inter - residue constraints mimics the steric specificities of the real residues . these specificities are achieved through the specification of which pairs of residues are allowed to get closer , as first neighbors , and their main consequence is to select folding and unfolding pathways through the configurational space . the set of inter - monomer constraints is fixed for each monomer pair , that is , it does not depend on the particularities of the native structure . the configurational energy of first neighbor inter - residue contacts is \sum_{\{i , j\}}(h_{i , j}+c_{i , j})\delta_{(i , j),[\kappa , l ] } , where the sum runs over the set of all residue pairs ; the factor \delta_{(i , j),[\kappa , l]}=1 if the pair forms a first neighbor contact in the configuration , and \delta_{(i , j),[\kappa , l]}=0 otherwise . from a large set of independent runs , one finally gets the decay histogram of the number of unfolded proteins as a function of the mc time . these data are then fitted by one ( or more ) exponential function , giving the specific characteristic folding time for that structure . the simulations are carried out in a given range of temperatures for several values of the nonextensive parameter , and for distinct native structures . each native structure is characterized by its topological complexity , which can roughly be estimated by its structural contact order . as an encompassing survey , table shows as a function of the temperature and nonextensive parameter , in the interval and . three representative target ( native ) structures ( identified as i d 866 ; 1128 and 36335 ) were used ; in general , depends on the structure complexity , and is a continuous , convex function of and . a total of independent runs were used for each pair . the structure i d 36335 presents higher topological complexity than the other two , a fact reflected in its larger . for any temperature , there is a specific , let us say , that minimizes , that is , as emphasized in table by shaded cells ; better approximations can be achieved by extra refinement of . the uncertainty in was estimated by the standard deviation of the mean of means , considering distinct samples taken from an extended set of independent runs .
the uncertainty depends on the pair the smallest uncertainties occur for those specific values which minimize the characteristic folding time .on the other hand , when the system approaches the glassy regime ( the time spent in metastable states increases substantially , and so is strongly influenced by the size of the set of independent runs .this scenario suggests that the kinetic of the search mechanism is equally reproduced , not mattering if the configurations are relatively weighted by mean of the generalized boltzmann factor , namely , ( see eq .( [ 2 ] ) ) , or by the conventional boltzmann factor , with the system temperature increased by some amount with respect to .this can be seen clearly if , for each the behavior of is plotted as a function of the translated temperature scale , as shown in figure [ figure2 ] for structure i d 1128 ; is the temperature in that approaches for that value of .essentially all curves behave in the same way about a more detailed examination shows that the distributions of folding times are essentially the same in both approaches , that is , using or figure [ figure3 ] shows for structure i d for different values of ; much more independent runs were employed for each case . in general , the distributions are better fitted with one or more lognormal curves , depending on the temperature . for ,the system approaches the glassy regime with manifestation of ergodic difficulties , as indicated by the three peaked curve ; figure [ figure3 ] , open , smaller circles .as increases from the domain of is accordingly reduced ; at the distribution presents the smallest domain , namely , and as increases from this point its behavior is reverted : the size of the domain starts to increase again and the curve s peak moves in the direction of larger . in the region of the smaller folding times , namely for , all curves present the same behavior , if is restricted in the interval .the meaning for this is : even with the temperature 25% higher than , there exist some configurations among the initial open ones , which combined with certain configurational evolution , can lead the chain very rapidly into the native structure .note that this is also true for the case the frequency distribution for ( open large circles generalized boltzmann factor ) is practically the same as that for ( full smaller circles conventional boltzmann factor ) , implying in the convergence of the folding characteristic time for the two cases , namely , ( see table and figure [ figure3 ] ) .this result confirms that ( for the present problem ) the net effect of the generalized boltzmann weight on the kinetic of the search mechanism is equivalent , from the perspective of the conventional boltzmann factor , to a specified increase in the system temperature , that is , a certain increase on thermal fluctuations .so , a specific well tuned amount of thermal fluctuation is what determines the fastest folding process . then , independent of the approach ( generalized or conventional boltzmann factor ) , and according the hypothesis that the folding instruction encoded in the residue sequence provides a folding kinetic as fast as possible ( section ii , second premise ) , is adopted as the optimum the actual characteristic folding time . however , due mostly to the peculiarities of protein systems , the folding process must be minimaly optimazed in a relatively narrow range of temperature , and for the total set of proteins of each living organism . 
in this sense the two approaches are not equivalent : the fact that each target structure has a proper temperature for fastest folding could be seen as a model deficiency that should be improved by approaching the problem through nonextensive statistical mechanics . moreover , as already mentioned , the search mechanism operates equally in a large temperature interval , but , once the native structure is found , the other stage of the folding process takes place , the overall productive stabilization , which is strictly dependent on the temperature . indeed , for the set of proteins of each organism there is a working temperature interval ( with ) outside of which its functionality can be seriously reduced or completely lost . therefore , the system temperature must be kept as the reference temperature , measured macroscopically , and all thermal characteristics of a nanosize body , in response to the local thermal fluctuations , should be conveniently controlled by the nonextensive parameter . for any target structure , at a specific system temperature , it is always possible to adjust in order to get the optimum . but what intrinsic factors would determine a specific value that induces the fastest folding for that specific protein ? in the present case , namely a chain evolving through the configurational space from an open chain into a compact specific globule , the straightforward idea comes from the observation that local fluctuations of should depend on the spatial scale . indeed , along the simulation specific traps as well as wrong packing tendencies are recurrent , and so the resulting effect of local thermal fluctuations is to promote a rich variety of shapes and sizes of the globule . to see this argument in more detail , let each residue of the chain be associated , as an upper bound , with just one degree of freedom , which allows us to explore the relation between and the number of degrees of freedom of the system : , eq . ( [ 3 ] ) and ( [ 4 ] ) . through this relation we may recognize a subtle association between and the topological complexity of the native structure . one notes , firstly , the dynamic nature of the number of degrees of freedom : as the chain degree of compactness changes in the course of time , changes accordingly . so , for an open chain we have , and for a fully compact globule . using as a limiting condition , one gets a kind of upper bound for , that is , . actually , along the folding process , energy and topological traps must be overcome until the target structure is reached . such recurrent traps keep the chain for a relatively long time in wrong conformations of different degrees of compactness , which must be disassembled so that the folding process can be restarted . therefore , as the chain experiences the thermal effects differently depending on its compactness , the simulation process should be governed by a variable instead of a fixed parameter .
however , depending on the number of traps and their peculiarities , determined by the combination of the complexity of the native structure and the chain sequence , a specific ( kind of average ) may be associated with each target structure ; this is what we did in this work , and it is shown in table for three different target structures . clearly , a direct inspection of this process can be carried out using a dynamic procedure that changes ( appropriately ) the value of along the simulation . the implementation of this idea is now in progress . the hypothesis about the folding reaction as two independent stages ( search and stabilization ) enabled us to place emphasis just on the search mechanism as a universal process guided by the hydrophobic force , which performs equally in a large range of temperatures . the premise about the fastness of the folding process ( necessary to prevent protein aggregation ) was used in order to associate the nonextensive parameter with each native structure . the comparison between the two approaches , namely the nonextensive and the conventional mechanics , suggests that suitable thermal fluctuations , adequately achieved only in the nonextensive context , drive the chain through the fastest possible courses to the native conformation , as shown in figure [ figure2 ] . the generalized boltzmann factor has a qualitatively equivalent effect with respect to the conventional boltzmann factor , that is , to enlarge the chance of removing the chain from energetic or topological traps . although their extremum effects on the transition probabilities between two consecutive configurations are energetically shifted ( figure [ figure1 ] ) , appropriate combinations of and , such as and for example , can produce practically the same folding time distribution ( figure [ figure3 ] ) , determining the same optimum characteristic folding time . the well known u shape dependence of on temperature , shown in figure [ figure2 ] , has been commonly attributed exclusively to peculiarities of the chain sequence or to the complexity of the target structure . indeed , sequences are usually generated and tested for their ability to fold rapidly in a small and specific range of temperature , even knowing that this procedure eliminates many suitable structures that would otherwise be important for kinetic studies . but a new perspective emerges when the local thermal fluctuations experienced by nanoscale structures are associated with their spatial characteristics ( such as size and degrees of freedom ) by means of the parameter from nonextensive statistical mechanics . specifically , like a chaperone that assists the folding , well tuned thermal fluctuations help to disassemble wrongly collapsed chain segments , improving the fastness of the folding process ; otherwise , using conventional statistical mechanics , the same effect would be achieved only at a higher temperature of the reservoir . therefore , extending this scenario to real protein systems , we may visualize the two main driving forces ( entropic forces compacting the chain and local thermal fluctuations tending to open it ) supporting a continuous process of folding / unfolding until , eventually , the neighborhood of the native state is reached . at this point , and only under this condition , the native structural peculiarities and chain energetic interactions , as encoded along the chain sequence , would be associated in a cooperative and fully productive way , guaranteeing the overall stability of the globule .
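the two weighting schemes whose effects are being compared can be written down compactly . the sketch below ( python ) contrasts , for an illustrative energy change and temperature that are not taken from the paper's tables , the conventional factor exp(-\delta e / kt ) with the generalized ( tsallis - type ) factor [ 1 - ( 1 - q)\delta e / kt ]^{1/(1-q ) } ; for q > 1 the latter gives uphill moves a larger weight at the same nominal temperature , which is the qualitative sense in which it mimics a hotter conventional reservoir .

import numpy as np

def boltzmann_factor(delta_e, temperature):
    """conventional weight exp(-delta_e / kT), with k absorbed into the units of T."""
    return np.exp(-np.asarray(delta_e, float) / temperature)

def generalized_boltzmann_factor(delta_e, temperature, q):
    """tsallis-type generalized weight [1 - (1 - q) delta_e / kT]**(1/(1-q)).
    the weight is set to zero wherever the bracket becomes negative (the usual
    cutoff); for q -> 1 the expression reduces to the conventional exponential."""
    if abs(q - 1.0) < 1e-12:
        return boltzmann_factor(delta_e, temperature)
    bracket = 1.0 - (1.0 - q) * np.asarray(delta_e, float) / temperature
    # np.abs keeps the discarded branch of np.where free of invalid powers
    return np.where(bracket > 0.0, np.abs(bracket) ** (1.0 / (1.0 - q)), 0.0)

delta_e = np.linspace(0.0, 5.0, 6)   # uphill energy changes, arbitrary units
t, q = 1.0, 1.1                      # illustrative values only
print(np.round(boltzmann_factor(delta_e, t), 4))
print(np.round(generalized_boltzmann_factor(delta_e, t, q), 4))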
as a final remark , we recall that the exploratory analysis summarized in table 1 suggests that increases with the topological complexity of the target structure . indeed , treating as a variable , let us say , we get essentially the same result , that is : the characteristic time converges to the same obtained using as a parameter . in a preliminary investigation , was functionally linked to the instantaneous radius of gyration , which was used as a measure of the chain compactness ( degrees of freedom ) . accordingly , for each of the several distinct target configurations investigated , the resulting -distribution is characterized by one or two peaks around the constant used as a parameter .
protein folding is a universal process , very fast and accurate , which works consistently ( as it should ) in a wide range of physiological conditions . the present work is based on three premises , namely : ( ) the folding reaction is a process with two consecutive and independent stages , namely the search mechanism and the overall productive stabilization ; ( ) the folding kinetics results from a mechanism as fast as can be ; and ( ) at nanoscale dimensions , local thermal fluctuations may have an important role in the folding kinetics . here we focus exclusively on the first stage of the folding process ( the search mechanism ) . the effects and consequences of local thermal fluctuations on the configurational kinetics , treated here in the context of nonextensive statistical mechanics , are analyzed in detail through the dependence of the characteristic time of folding ( ) on the temperature and on the nonextensive parameter . the model used consists of effective residues forming a chain of 27 beads , which occupy different sites of an infinite lattice , representing a single protein chain in solution . the configurational evolution , treated by monte carlo simulation , is driven mainly by the change in free energy of transfer between consecutive configurations . we found that the kinetics of the search mechanism , at temperature , can be equally reproduced either if configurations are relatively weighted by means of the generalized boltzmann factor ( ) , or by the conventional boltzmann factor ( ) , but in the latter case with temperatures . however , it is also argued that the two approaches are not equivalent . indeed , as the temperature is a critical factor for biological systems , the folding process must be optimized in a relatively small range of temperature for the set of all proteins of a given organism . that is , the problem is no longer a simple matter of renormalization of parameters . therefore , local thermal fluctuations in systems with nanometric components , such as proteins in solution , become an important factor affecting the configurational kinetics . as a final remark , it is argued that for a heterogeneous system with nanoscopic components , should be treated as a variable instead of a fixed parameter .
throughout the years , brain theory incarnated in various forms following contemporaneous technology .as expressed by braitenberg : _ ... fascinating aspects of the latest technology are always creeping into science and being turned into subconscious motives for theory . for brain science ,in the nineteenth century it was optics , with its idea of projection , and in the twentieth century it was radio engineering , with its logical circuits ... "_. in the nineties , the fundamental concepts behind the physics of complex systems , motivated us to work on ideas that now seem almost obvious : 1 ) the mind is a collective property emerging from the interaction of billions of agents ; 2 ) animate behavior ( human or otherwise ) is inherently complex ; 3 ) complexity and criticality are inseparable concepts .these points were not chosen arbitrarily , but derived , as discussed at length here , from considering the dynamics of systems near the critical point of a order - disorder phase transition .simply put , this view considered the brain as _ just another _ dynamical system at criticality , knowingly non unique .the brain seen as a dynamical object as discussed in these notes is grounded on accessible empirical evidence : as brain activity unfolds in time at all spatial scales , different patterns evolve .proper measurements provide quantitative information regarding these patterns ( for example , neuron spike trains , local field potentials , metabolic signals , behavioral measures , etc ) .the question is whether is it possible to explain all these results from a single fundamental principle , as it is the tradition in physics . and, in case the answer is affirmative , what does this unified explanation of brain activity implies about goal oriented behavior ?we will submit that , to a large extent , the problem of the dynamical regime at which the brain operates it is already solved in the context of critical phenomena and phase transitions .indeed several fundamental aspects of brain phenomenology have an intriguing counterpart with dynamics seen in other systems when posed at the edge of a second order phase transition .the paper is organized as follows : first , emergent complex phenomena and the intimate connection between criticality and complexity will be described .this will be in section ii , which reviews earlier work on ant s swarm , a metaphorical mind in itself , which is mathematically very close to nowadays models of collective decision - making .the large scale analysis of brain dynamics will be introduced in section iii , where the experimental results indicating criticality are discussed in detail .section iv provides examples of how complexity and criticality manifest themselves in a relatively smaller scale ( scale of a few millions of neurons ) .as mentioned above , the statistical properties of the brain dynamics must be reflected in the dynamics of behavior .this is treated in section v where recent reports seems to show such correspondence .section vi , dwells into the evolutionary argument by which the brain and the animate behavior must be critical anytime an organism needs to survive and evolve in an environment which , by thermodynamic reasons , is also critical .secton vii is dedicated to discuss the tendency in the field to consider equilibrium models driven by external noise to accommodate the empirically observed fluctuations .section viii closes the paper with a prospective of some relevant issues to pursue further .emergence refers to the observation of 
dynamics that is not expected from the systems equations of motion and , almost by ( circular ) definition , is exhibited by complex systems . as discussed at length elsewhere , three features are present in complex systems : ( i ) they are _ large _ conglomerate of _ interacting _ agents , ( ii ) each agent own dynamics exhibits some degree of _ nonlinearity _ and ( iii ) energy enters the system .these three components are necessary for a system to be able to exhibit , at some point , emergent behavior .it is well established that a number of isolated linear elements can not produce unexpected behavior ( mathematically , this is the case in which all motion can be formally predicted ) .an inspiring example of emergence of complexity is the dynamic of swarms , which we used in the past as a toy model to understand how cognition could arise in neural networks .of course , the case for social insects representing a paradigm of cooperative dynamics was extensively discussed before .the uncertainty was centered at what collective property of foraging ants allows the emergence of trails connecting the nest with the food sources , forming structures spanning sizes several order of magnitude larger than any of the individual s temporal or spatial scales . to clarify that, millonas introduced a spatially extended model of what he called `` protoswarm '' .his objective was to understand the microscopic mechanism by which relatively unsophisticated ants can build and maintain these very large macroscopic structures . in the swarm model, there are two variables of interest , the organisms ( in large number ) and a field , representing the spacial concentration of a scent ( such as pheromone ) . as in real ants , the model s organismsare influenced in their actions by the scent field and in turn they are able to modify it by depositing a small amount of scent in each step .the scent is slowly evaporating as well .the organisms only interact through the scent s field .the model is inspired in the behavior of real ants , in which they are exposed to bridges connecting two or more areas where the ants move , feed , explore , etc .eventually they will discover and cross one of the bridges .as it is illustrated in figure [ puentecitos]a they will come to a junctions where they have to choose again a new branch , and continue moving . since ants both lay and follow scent as they walk, the flow of ants on the bridges typically changes as time passes . in the example illustrated in figure 1a , after a while most of the traffic will eventually concentrate on one of the two branches .the collective switch to one branch is the emergent behavior , something that can be understood intuitively on the basis of the positive feedback between scent following , traffic , and scent laying . what is less obvious is linking these ideas with the microscopic rules governing the dynamics of a single ant .numerous mathematical models and computer simulations were able to capture the ants behavior in the bridge experiment . 
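a minimal version of the positive feedback loop behind the bridge experiment is easy to simulate . in the python toy below , the saturating choice function and all parameter values are placeholders ( they are not millonas's response function nor the gain / capacity parameters discussed next ) ; ants repeatedly pick one of two branches with a probability that grows with the scent already deposited there , and most runs end with traffic concentrated on a single , randomly selected branch .

import numpy as np

def simulate_bridge(n_ants=1000, k=5.0, beta=2.0, deposit=1.0, seed=0):
    """two-branch choice with scent reinforcement.

    each ant chooses branch i with probability proportional to (k + s_i)**beta,
    where s_i is the scent accumulated on that branch; k and beta are toy
    sensitivity parameters. returns how many ants ended up on each branch."""
    rng = np.random.default_rng(seed)
    scent = np.zeros(2)
    counts = np.zeros(2, dtype=int)
    for _ in range(n_ants):
        weights = (k + scent) ** beta
        branch = rng.choice(2, p=weights / weights.sum())
        counts[branch] += 1
        scent[branch] += deposit          # each crossing reinforces that branch
    return counts

for seed in range(5):
    print(simulate_bridge(seed=seed))
# typically one branch captures most of the traffic, but which one wins
# changes from run to run: the broken symmetry is selected by early fluctuations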
.the bridge experiment results can be used as building blocks to more sophisticated settings , such as ants freely exploring an extended arena .the insight comes from a rather clever way that millonas s model discretized space .the model can be viewed , for descriptive purposes , as a network constructed by connecting each point of a square lattice to its eight nearest neighbors , as in the cartoon of figure [ puentecitos]b .thus , at each step each ant makes a decision to choose one of eight bridges ; and deposits a fixed amount of pheromone as it walks .the decision is based on the amount of scent at each of the eight locations .the ants sensory apparatus embedded in a physiological response function was modeled following biological realism , having two parameters : one which could be considered analogous to gain and the other the inverse of sensory capacity ( or dynamic range ) .the plot in figure [ antphase ] condenses the results from the numerical simulations with different values for the physiological response function . at each combination of the explored and valuesthere is a square plot which depicts the locations of each ant at the last ten steps of the simulation .it can be seen that ants converge to different behaviors depending on the parameter values . for values of both small gain andsmall dynamic range ants execute a random path , resulting in the plots fully covered , as in the right bottom corner .thus , because of the low sensitivity ants are just making random choices at each juncture . for large enough gain( top left corner ) , ants senses saturate resulting in clusters of immobile ants in the same attracting spot .it is between these two states , one disordered and the other frozen , that the swarm can organize and maintain large structures of traffic flow as those seen in nature .the stability analysis in makes straightforward to understand the transition between the disordered walks and the complex structures of trails ( line and circles in figure [ antphase ] ) .intuition already suggests that the model s , values at which the order - disorder transition happens must depend on the number of ants able to reinforce the scent field . in figure[ transition ] results from several hundreds runs with fixed parameters and increasing density of ants are shown .the degree of collective order was evaluated by the proportion of ants `` walking '' on lattice points having above average scent concentration ( see further details in ) .the top panel shows a plot of the results where the dots indicate the outcome of each individual run ( with different initial conditions ) and circles the average of all runs . for low ants density the expected random behavioris observed , with equal likelihood for ants to be in or out of a high scent field . 
for increasing number of antsthe swarm suddenly starts to order until it reaches the point in which the majority of the ants are walking on a field with scent concentration larger than the average .notice that , as the density approaches the transition , the amplitude of the order parameter s fluctuation increases ( see bottom panel of figure 3 ) , a divergence which is generic for criticality .thus , at the transition , large trial - to - trial variability is found for repeating realizations of the same numerical experiment .the most significant point here is to note that the best performance for the swarm , namely where long trails are formed ( see figure 2 ) corresponds also to the conditions for maximum variability .thus maximum variability and performance coexist .these results were the first to show the simplest ( local , memoryless , homogeneous and isotropic ) model which leads to trail forming , where the formation of trails and networks of ant traffic is not imposed by any special boundary conditions , lattice topology , or additional behavioral rules .the required behavioral elements are stochastic , nonlinear response of an ant to the scent , and a directional bias .there are other relevant properties , discussed in detail elsewhere , that arise _ only _ at the region of ordered line of traffic , including the ability to reconstitute trails and amplify weak traces of scent , in analogy to memory traces .the conclusions important to these notes are that simple local rules allow the emergence of complex self organized patterns , which extend in a non local way beyond the scale of a single agent and which appear suddenly in a dynamical regime between random and uniform behavior , the `` critical state ''. the remaining of these notes will show evidence of an analogous behavior when neurons interact nonlinearly in the human brain , giving rise to the emergence of complex non - local patterns , as well as the functional and behavioral consequences of these patterns .to visualize the collective activity of the individual constituents of the human brain ( neurons ) is harder , of course , than to contemplate the ants of the previous sections . for this purposedifferent methodologies have been developed which are capable to register this activity at different timescales and with different limitations .large scale activity ( resulting from the averaging of thousands of neurons ) can be measured with excellent spatial and good temporal resolution using fmri ( functional magnetic resonance imaging ) .this non - invasive technique allows indirect measurements due to metabolic and oxygenation changes correlated with synaptic activity , finally encoded in the bold ( blood oxygen level dependent ) signal .patterns of global activity can be analyzed studying the relationship between the behavior of different members of the collective , in this case , regions of cortical tissue on the millimeter scale .functional interactions between these areas can be studied comparing their bold signal time - courses .these functional interactions are not only present when the brain engages in a task but also during rest ( i.e. spontaneous activity , in contrast to evoked activity ) . 
during the remainder of thisnotes we will focus mostly on resting state brain dynamics .these interactions can be represented using the concept of a network or a graph .a graph consists of nodes connected by links , in this case the nodes are voxels ( the smallest cubic regions that the spatial resolution of the method can resolve ) and links represent coordinated activity between these two linked voxels .different measures of coordination between the time course of the bold signal may be employed , the simplest being linear correlation .functional connectivity networks can be constructed computing the linear correlation between bold signal pairwise for all voxels and introducing a link between voxels if they are correlated beyond an arbitrary threshold ( for a diagram of the procedure see figure [ procedure ] ) . for a wide and reasonable range of thresholds ,functional brain networks thus constructed are scale free , this is , the degree histogram of the voxels follows a power law of exponent approximately 2 ( where degree is defined as number of neighbors in the network ) .also , functional connectivity networks have the small world property ( the mean number of links that must be crossed to connect a given pair of nodes is small relative to the local structure of the network ) and are assortative ( connectivity is stronger between nodes of similar degree ) .these properties are shared with a large number of networks constructed from different biological and social systems .they have strong implications regarding information transfer , stability and general robustness of the networks .together , the scale free and small world properties imply the presence of long range ( non - local ) functional interactions between brain regions .most important to these notes , functional connectivity networks constructed from time - courses extracted from a paradigmatic example of a system with critical dynamics ( ising model for ferromagnetic materials ) are virtually indistinguishable from brain functional connectivity networks .this striking similarity between both networks is not imposed in the model : it arises as a very general consequence of critical dynamics , thus giving an explanation to the topological properties of brain functional connectivity networks solely based on the concept of criticality .functional connectivity networks are a useful global description of the brain , albeit a static one .they can be regarded as the `` skeletons '' of the dynamical systems , over which patterns of activity unfold over time .deeper insight is gained from dynamical analysis ( temporal and spatiotemporal ) of large scale brain activity .imaging experiments have gathered evidence showing that ( even in the absence of explicit perception and cognitive performance ) this activity displays complex behavior . in the purely temporal domain , it is not homogeneous nor periodic ( as eeg , meg , fmri , etc , recordings indicate ) neither there is a representative or predominant frequency , as shown by the 1/f decay of spectral density . 
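the construction of a functional connectivity network described above reduces to a few array operations . the sketch below ( python ; the synthetic time series , the threshold value and the network size are placeholders for real bold data ) correlates every pair of node time courses , links the pairs whose correlation exceeds the threshold , and tabulates the degree of each node , which is the quantity whose histogram is reported to follow a power law .

import numpy as np

def degrees_from_timeseries(timeseries, threshold=0.5):
    """functional connectivity degrees from a (n_nodes, n_timepoints) array:
    threshold the pairwise linear correlations and count neighbours per node."""
    corr = np.corrcoef(timeseries)        # pairwise linear correlation
    np.fill_diagonal(corr, 0.0)           # ignore self-correlations
    adjacency = corr > threshold          # link = correlation beyond threshold
    return adjacency.sum(axis=1)

# synthetic stand-in for bold signals: a few shared slow components plus noise,
# used only to exercise the pipeline (real input would come from an fmri scan)
rng = np.random.default_rng(42)
n_nodes, n_time = 200, 300
latent = rng.normal(size=(5, n_time))
mixing = rng.normal(size=(n_nodes, 5))
data = mixing @ latent + rng.normal(scale=2.0, size=(n_nodes, n_time))

degrees = degrees_from_timeseries(data, threshold=0.5)
hist, edges = np.histogram(degrees, bins=20)
print(list(zip(edges[:-1].astype(int), hist.tolist())))   # degree histogram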
in the spatiotemporal domain , fmri whole brain recordingsreveal how activity explores complex patterns with long range correlations and anticorrelations .strikingly , these patterns can be reduced to a small number of prototypical ones using a mathematical technique termed independent component analysis ( ica ) .careful examination shows that these structures appear and disappear over time in the fashion of `` passing clouds '' that last only a few seconds .it has been shown that the spatial maps of these resting state independent components ( also called resting state networks or rsns ) can be associated with different cognitive functions and are strongly correspondent with activation maps obtained during task performance .this correspondence supports a dynamical view of resting state , in which activity continuously explores patterns ( which are `` metastable states '' ) only to get temporarily locked in one of them as a task is executed or a sensory system stimulated .not surprisingly , it is when operating at the critical point that a dynamical system becomes most efficient in the _ flexible _ exploration of these metastable states .this dynamical view of resting state need to be exploited given the strong support from recent experiments involving simple motor tasks and resting state .briefly , it has been shown that average bold signals associated with spontaneous co - activations of key motor cortical areas strongly resemble co - activations during evoked activity , not only in location and temporal shape but also in intensity .in other words , spontaneous activations are completely analogous to evoked activations , except that the first ones are transient and their timing is uncertain .this correspondence is shown in figure [ rbeta ] for regions in the primary motor cortex , supplementary motor area and cerebellum , during finger tapping and rest .the continuous exploration of these patterns during resting state may have strong neurophysiological relevance , yet to be fully understood .it must be noted that while resting - state and spontaneous brain activity seems an ill defined concept , the bulk of results described in this last paragraph has not only been observed in conscious behaving humans but also during sleep and anesthesia .a great consistency is observed across subjects : approximately 10 minutes of fmri measurements from a single subject are enough to reveal the main resting state networks ( as shown in figure [ dantenature ] ) this dynamical scenario is highly consistent with the brain operating in a critical regime .a final observation regarding whole brain measurements is that the lack of measurement scale implied by critical dynamics is also manifest in the invariance of the two - point correlation function of the bold signal after a normalization procedure ( spatial coarse graining ) is repetitively carried out .additional evidence is derived from the computation of the correlation length of the bold signal fluctuations , which are known to diverge near the critical point .figure [ dani ] shows experimental results obtained from human fmri at rest indicating that correlation length is not constant but diverges with the size of the cluster considered. . 
this demonstration is a landmark of critical phenomena , together with previous observations of scale invariance at large scaleadd weigth to the contention of criticality as the brain s dynamical state .historically , the first direct demonstration of collective critical dynamics was the observation that a cultured slice of neurons supported `` avalanches '' , this is , intermittent bursts of activity that spread up to the whole system .this dynamical state is halfway between highly synchronized oscillatory activity and disordered noise . even though pairwise correlations are low , and with highest probably only a small number of neurons fire in synchronized fashion , occasionally activity spreads in the form reminiscent of an avalanche in per bak s sandpile model .power laws are observed for event size ( number of electrodes registering electrical activity ) with exponents agreeing with those of a critical branching process .these results have been replicated in other experimental settings , including cortical measurements in awake , behaving monkeys and rats .modeling efforts to understand the precise neuronal cause of avalanches are currently being made .certain models are able to replicate these scale free densities while explicitly avoiding criticality , however , recent numerical results show that their predicted exponents are in the wrong order of magnitude as those experimentally observed .also , even if avalanches are a dynamical process , explanations of scale free behavior can be based on the underlying connectivity of the neurons .however , directly incorporating long range correlations in the structural connectivity of the collectivity solves one problem ( integration ) but succumbs to a different one ( segregation ) .critical dynamics is , thus , a natural constraint to incorporate into a model of neuronal avalanches. models of neurons with activity dependent synaptic coupling undergoing self organized criticality reproduce statistics of neuronal avalanches , even with trivial ( random ) connectivity of its elements . finally , even though the most salient features of experimental findings regarding neuronal avalanches are power laws probability densities for size and event durations , there are other predictions to be fulfilled if the underlying dynamics are in fact critical . 
in short , these are : i ) time scales separation between the dynamics of the triggering event of the avalanche and the avalanche itself , ii ) stationary avalanche size statistics regardless of avalanching rate fluctuations ( excluding non - homogeneous poisson processes ) , iii ) omori s law for earthquakes must apply to avalanche probabilities after and before main events , iv ) average size of avalanches following a main avalanche decays as an inverse power law , v ) avalanches spreads spatially on a fractal .recently , it has been reported that experimental data in fact supports these predictions , thus narrowing the search of models of avalanches to those undergoing critical dynamics .as exposed in the previous sections , the hypothesis that collective brain dynamics operates at a critical point has received plenty experimental support .this evidence spans a wide range of scales both in time and space and leads in a natural way to a discussion of consequences at a behavioral level .simply put : if brain activity is critical , signatures of criticality must be as well observed in behavior and perception .an old unsolved problem of psychophysics relates to the need of a very ample dynamic range of neuronal responses , spanning several orders of magnitude .for example , visual perception is known to adapt from very dimly lit environments ( such as caves , forests , etc ) to landscapes of high luminosity ; however , the dynamic range of an isolated neuron ( spanning at most a single order of magnitude ) is too limited to do so .the key to solve this problem resides in the consideration of a connected network of excitatory neurons which has an amplification factor of one . in this model , neurons operate in a critical point , in contrast to a subcritical regime in which sensory inputs quickly die , or a supercritical regime , in which any input spreads in an explosive fashion throughout the network .these three possibilities are exemplified in figure [ kopelli ] .thus , the fact that a system at the critical point has the greatest dynamical range , is now the key to solve the dilemma of human perception optimized over several orders of magnitude . on the other hand ,animal behavior has been shown to reflect the critical dynamics of brain activity .recordings of spontaneous motion activity of rats during large periods of time reflects scale free distribution for length of movement times and pauses .this result , observed in an animal model and controlled laboratory conditions , has similarities with reports on human activities such as writing of letters , emails , web browsing and motion initiation .together , these results suggest the presence of behavioral signatures of critical dynamics in animals and human beings .the ingredients for complex emergent behavior presented in the first section of these notes ( a large number of energy - driven nonlinearly interacting elements ) are not only present in the human brain , but are also ubiquitous in the whole natural world .humans and other animals navigate in this complex critical world , which as such offers constantly surprises and unexpected events .one can argue that in a subcritical and uniform world a brain is a superfluous organ since there is nothing to learn , on the other hand , an ever - changing supercritical world offers no regularities to be learned . 
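The dynamic-range argument made above can be illustrated with a toy excitable network in the spirit of the amplification-factor-one model: the sketch below compares, in a mean-field fashion, the response curves of sub-critical, critical, and super-critical spreading. The update rule, network size, and branching values are placeholder assumptions, not the published model.

```python
import numpy as np

def response_curve(sigma, stim_rates, n_units=2000, n_steps=2000, seed=0):
    """Mean activity of a toy excitable network driven at various stimulus rates.
    Each of a unit's k inputs transmits with probability sigma / k, so sigma acts
    as the branching parameter (sub-critical < 1, critical = 1, super-critical > 1)."""
    rng = np.random.default_rng(seed)
    k, burn = 10, 200
    responses = []
    for h in stim_rates:
        active = rng.random(n_units) < 0.01  # warm start so self-sustained states can ignite
        acc = 0.0
        for t in range(n_steps):
            rho = active.sum() / n_units
            p_internal = 1.0 - (1.0 - sigma / k) ** (k * rho)   # mean-field spreading
            p_on = 1.0 - (1.0 - h) * (1.0 - p_internal)         # external drive or spreading
            active = rng.random(n_units) < p_on
            if t >= burn:
                acc += active.mean()
        responses.append(acc / (n_steps - burn))
    return np.array(responses)

stim = np.logspace(-6, 0, 13)
for sigma in (0.8, 1.0, 1.2):
    f = response_curve(sigma, stim)
    lo, hi = f.min(), f.max()
    f10, f90 = lo + 0.1 * (hi - lo), lo + 0.9 * (hi - lo)
    h10, h90 = np.interp([f10, f90], f, stim)
    print(f"sigma={sigma}: dynamic range ~ {10 * np.log10(h90 / h10):.1f} dB")
```

At the critical branching value the response grows slowly over many decades of stimulus intensity, which is the network-level counterpart of the sub-/super-critical contrast drawn in the preceding paragraph.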
in the middle of these two extreme situationsthere is a real need for a learning device .the coupling between a critical world and a critical brain is best exemplified in the playing habits of children .young children become quickly bored of inanimate toys ( even the addition of sophisticated motion mechanisms does not dramatically improve this situation ) , however , they obtain unending fun from animal pets for long periods of time .the reason behind is that animate behavior is a mixture of order and surprise in the same sense than the spatiotemporal patterns of a system at the critical point are composed of a blend of order and disorder .thus , the need of a brain stems from the fact that the world around is critical , thus , in turn evolutionary pressures constraint brains to operate in a critical regime .brains must not only be capable of mere learning : the ability to forget is central to adaptive behavior . in a sub - critical brain state of highly correlated activity , memories would be `` frozen '' and ever present .on the other hand , a supercritical brain would have its activity patterns changing wildly and constantly in time , resulting in the inability to retain any memories .the conclusion is that , in order to adapt to a complex world that constantly presents us with novelties , evolution has created brains and forced them operate at the critical point .as discussed in section ii , criticality is a universal dynamics exhibited by a large class of systems .consequently it offers a unified theoretical framework to explain a myriad of apparently unrelated observations. we will briefly provide an illustration of this possibility , discussing work recently reported , where the current theoretical understanding can be fruitfully applied . in a series of papers the variance andmean - based spatial fmri patterns obtained in subjects of different ages were compared , concluding that the descriptive value of the variance was greater than the averages .the authors suggested that _ examination of bold signal variability may reveal a host of novel brain - related effects not previously considered in neuroimaging research"_. additional work ( by the same authors ) went even further , revealing that bold signal variability is greater at younger age , confirming that _ _ younger , faster , and more consistent performers exhibited significantly higher brain variability across tasks , and showed greater variability - based regional differentiation compared to older , poorer performing adults"__ .the origin of this variability was explained by suggesting that : _ the interplay between local and global dynamics governs the spatiotemporal configuration of the brain s functional architecture and keeps the system in a high - energy state , at the ` edge of instability ' among a number of different states and configurations . " _ , while the switch between such configurations was suggested to happen by the action of noise originated in the following manner : _ intrinsic neural noise stochastic fluctuations in information transfer caused by imprecise timing of cellular processes serves to nudge the system from one state to another and thus confers the capacity to make fluid and adaptive transitions between different states and reconfigure either spontaneously or in response to external ( task ) demand . 
" _ besides the obvious merits of calling attention to the variability , the crucial question here should be how much real understanding was gained by finding that the variability was more informative than the average .but lets postpone the answer to this question until we consider yet another example .the second example is concerned with a very interesting book dedicated recently to the topic of noise in the brain , which in its very first five lines reads : _ the relatively random spiking times of individual neurons produce a source of noise in the brain .the aim of this book is to consider the effects of this and other noise on brain processing .we show that in cortical networks this noise is an advantage , for it leads to probabilistic behavior that is advantageous in decision making , by preventing deadlock , and is important in signal detectability"_. this is not an isolated quote taken out of the proper context .the authors enumerate a three reasons why the brain is inherently noisy and stochastic" including sensory noise arising external to the brain , cellular noise due to stochastic opening and closing of ion channels , and synaptic noise .these two examples are in line with nearly all detailed models of neuronal function which require ad - hoc noise inputs to function properly .simply put , without noise the system is at equilibrium , stuck on a stable state .all these analysis overlook a crucial question : where does this ( fine tuned or not ) noise comes from ? if it is generated by specific systems within the brain , then experimental evidence ( a noisy center " ? ) should be found pointing to where these systems are and how they work .of course , sarcasms aside , no such center would be found . instead ,ad - hoc noisy driving is not required by the theory of critical brain dynamics , since noise is self - generated by the collective dynamics which spontaneously fluctuates near the critical point as was shown in figure 3 .it is well known , that large fluctuations are inherent to a system approaching the critical point , and in that sense variability " is perfectly understood as part of the critical dynamics , and not a force external to the system . in our opinion , the often conjectured reasons for the noise presence , assigning a purpose to it , are _ teleological _ dead - ends that postpone true understanding .the fact is that the correct interpretation of the noise is a much deeper issue , which reflects the general tendency to consider only equilibrium models .these models achieve a better agreement with empirical observations only with the addition of _ external _ noise ( see for instance chapter 12 in or work compiled by steyn - ross recently ) .this deformation is not exclusive of neuroscience and its consequences are far reaching .for example , similar views in other complex systems include the explanation of episodes with very large number of species going extinct throughout earth history by adding external noise , such that for each extinction an out of space event ( meteorite ) hits the earth .in contrast , non - equilibrium models show that similar exceptionally rare and large extinctions ( and their mirror situations , namely speciations ) can be inherent of the co - evolutionary species dynamics , and that large extinctions could have happened in absence of meteorites . 
in these type of models , variability and disparate fluctuationsare ( at criticality ) typical .another example belongs to macro - economics , as the late per bak hammered many times in his writings and lectures , criticizing models in which each market crash would have required an event that drove the shares to plunge . in analogy, historians will assure us that in sarajevo on the 28th of june of 1914 a member of the black hand , a serbian nationalist secret society , assassinated archduke franz ferdinand , heir to the austro - hungarian throne .the equilibrium view will identify this single event ( noise ? ) as responsible to trigger the mobilization of 65 millions soldiers , 7 millions missing , 8 millions killed and 21 millions wounded .in contrast , the non - equilibrium approach will assert that the first war world ( and other infrequently large conflicts ) will eventually happen in absence of an isolated gunman .thus , coming back to the brain , where its main dynamics are fluctuations , it need to be recognized that having to add fluctuations ( noise ) to the model equations to get fluctuating dynamics is a serious limitation .it seems reasonable that future work consider that the differences between modeling a noisy brain " or a critical brain " are significant and relevant , because the gap separating these two views is conceptually as large as for physics is the difference between equilibrium and non - equilibriums dynamics .the study of collective dynamics and critical phenomena was pioneered by physicists during the past century , in order to solve problems related with material science and solid state physics .it is now clear that the theoretical framework used for these problems is very general and can be applied to understand an impressive range of phenomenology from many disciplines .emergent dynamical properties of interacting agents ( phase transitions , divergence of correlation length and fluctuations , maximum dynamical range , etc ) are far from explicit in the individual equations of motion , thus , a reductionist approach will be ( by its very definition ) unable to explain this emergence .pure reductionism has in fact been abandoned in a large list of fields which study complex many particle systems , from economics and sociology to molecular biology .it is surprising that this revolution has only recently began to have an impact on neuroscience , a discipline which deals more than any other with a collective of interacting agents . 
still today , a large number of published results stem from detailed experiments or simulations of neurons or networks of millions of neurons , including sometimes great biological detail but no considerations related with the statistical physics of collective phenomena in large systems .however , after the realization that theories of brain function must be compatible with the laws of collective behavior , a non - return point has been reached : a rapidly increasing number of reports being deal with theoretical and experimental evidence grounding the emergent critical dynamics hypothesis .researchers now find that paradoxical situations and seemingly disconnected facts can be re conciliated within the critical brain dynamics theory , without need of complicated ad - hoc hypothesis .a good example of this is the integration / segregation dilemma , a problem haunting neuroscientists since the beginnings of the discipline : how does the brain manages to integrate information from different modalities into a single decomposable scene , while making easy to segregate particular aspects of this scene at will ? or in other terms : how does brain activity from different regions coordinates without collapsing into a single block of uniform dynamics ?a concrete example is the synchronous coexistence of multiple operational modules studied by fingelkurts & fingelkurts .many connectionistic approaches have been taken to this problem , implying that coordination is hardwired into anatomical connectivity .these proposed solutions are able to solve one problem ( integration ) but still fail to explain the ease of brain dynamics to segregate . within the theory of critical brain dynamics , however , this situation arises naturally : even systems with trivial structural connectivity ( for example , the first - neighbor connections of the ising model ) achieve ( transient ) long range correlations at the critical point .any scientific theory needs to explain and integrate previous facts , solve apparently paradoxical situations , and make predictions amenable to experimental verification . along these noteswe have shown how these requirements are fulfill by the theory of emergent brain collective critical dynamics .the transfer of ideas from statistical physics to other disciplines is a young and exciting endeavor , even younger in neuroscience , however exciting results have already been achieved . only with the realization that brain is a collective ( and henceforth must follow the physical laws that govern collectives ) a deep understanding of brain and human behavior will be achieved .work supported by nih ( usa ) and by conicet ( argentina ) .e.t . was supported by an estmulo fellowship from the university of buenos aires .s. achard , r. salvador , b. whitcher , j. suckling , e. bullmore .resilient , low - frequency , small - world human brain functional network with highly connected association cortical hubs ._ j neurosci _ * 26 * , 63 ( 2006 ) .v. braitenberg . some arguments for a theory of cell assemblies in the cerebral cortex . in _ neural connections , mental computation _ , eds .l nadel , l cooper , p. culicover , m harnish .( the mit press , cambridge , ma , 1989 ) . p. expert , r. lambiotte , d.r .chialvo , k. christensen , h.j .jensen , d.j .sharp , f. turkheimer .self - similar correlation function in brain resting state fmri ._ j. r. soc .interface _ * 8 * , 472479 ( 2011 ) .fox , a.z .snyder , j.l .vincent , m. corbetta , d.c .van essen , m.e .raichle . 
the human brain is intrinsically organized into dynamic , anticorrelated functional networks ._ * 102 * , 9673 ( 2005 ) .swarms , phase transitions , and collective intelligence . in : _artificial life iii .langton ( ed . ) , ( proc .santa fe institute studies in the sciences of complexity , addison - wesley , reading , ma , 1994 ) m.m . millonas . a nonequilibrium statistical field theory of swarms and other spatially extended complex systems .in : _ pattern formation in physical and biological sciences . _p. claudis , ( ed . ) , ( proc .santa fe institute studies in the sciences of complexity , addison - wesley , reading , ma , 1994 ) j.m .pasteels , j.l .deneubourg , s. goss .transmission and amplification of information in a changing environment : the case of ants . in :_ laws of nature and human conduct . _( prigogine , i. & sanglier , m. , eds . ) ( gordes , brussels , 1987 ) .t. petermann , t.c .thiagarajan , m.a .lebedev , m.a .nicolelis , d.r .chialvo , d. plenz .spontaneous cortical activity in awake monkeys composed of neuronal avalanches .usa _ * 106 * , 15921 ( 2009 ) .e. rolls & g. deco ._ the noisy brain .stochastic dynamics as a principle of brain function . _( oxford univ .press , uk , 2010 ) r. salvador , j. suckling , m.r .coleman , j.d .pickard , d. menon , e. bullmore .neurophysiological architecture of functional magnetic resonance images of human brain ._ cerebral cortex _ * 15 * , 1332 ( 2005 ) .smith , p.t .fox , k.l .miller , d.c .glahn , p.m. fox , c.e .mackay , n. filippini , k.e .watkins , r. toro , a.r .laird , c.f .correspondence of the brain s functional architecture during activation and rest .usa _ * 106 * , 13040 ( 2009 ) .e. tagliazucchi , p. balenzuela , d. fraiman , p. montoya , d.r .spontaneous bold event triggered averages for estimating functional connectivity at resting state , _ neurosci .letters _ * 488 * , 158 ( 2011 ) .van den heuvel , c.j .stam , m. boersma , h.e .hullshof pol .small - world and scale - free organization of voxel - based resting - state functional connectivity in the human brain ._ neuroimage _ * 43 * , 528 ( 2008 )
the unique dynamical features of the critical state can endow the brain with properties which are fundamental for adaptive behavior . this proposal , put forward with per bak several years ago , is now supported by a wide body of empirical evidence at different scales demonstrating that the spatiotemporal brain dynamics exhibits key signatures of critical dynamics previously recognized in other complex systems . the rationale behind this program is discussed in these notes , followed by an account of the most recent results , together with a discussion of the physiological significance of these ideas .
A common strategy in visual object recognition tasks is to combine different image representations to capture relevant traits of an image. Prominent representations are for instance built from color, texture, and shape information and used to accurately locate and classify the objects of interest. The importance of such image features changes across tasks. For example, color information increases the detection rates of stop signs in images substantially, but it is almost useless for finding cars. This is because stop signs are usually red in most countries, whereas cars can in principle have any color. As additional but nonessential features not only slow down the computation time but may even harm predictive performance, it is necessary to combine only relevant features for state-of-the-art object recognition systems. We approach visual object classification from a machine learning perspective. In the last decades, support vector machines (SVMs) have been successfully applied to many practical problems in various fields, including computer vision. Support vector machines exploit similarities of the data arising from some (possibly nonlinear) measure. The matrix of pairwise similarities, also known as the kernel matrix, allows the data to be abstracted away from the learning algorithm. That is, given a task at hand, the practitioner needs to find an appropriate similarity measure and to plug the resulting kernel into an appropriate learning algorithm. But what if this similarity measure is difficult to find? We note that and were the first to exploit prior and domain knowledge for kernel construction. In object recognition, translating information from various image descriptors into several kernels has now become a standard technique. Consequently, the problem of finding the right kernel changes to that of finding an appropriate way of fusing the kernel information; however, finding the right combination for a particular application is so far often a matter of judicious choice (or trial and error). In the absence of principled approaches, practitioners frequently resort to heuristics such as uniform mixtures of normalized kernels, which have proven to work well. Nevertheless, this may lead to sub-optimal kernel mixtures. An alternative approach is multiple kernel learning (MKL), which has been applied to object classification tasks involving various image descriptors. Multiple kernel learning generalizes the support vector machine framework and aims at learning the optimal kernel mixture and the model parameters of the SVM simultaneously. To obtain a well-defined optimization problem, many MKL approaches promote sparse mixtures by incorporating a -norm constraint on the mixing coefficients. Compared to heuristic approaches, MKL has the appealing property of learning a kernel combination (with respect to the -norm constraint) and converges quickly, as it can be wrapped around a regular support vector machine. However, some evidence shows that sparse kernel mixtures are often outperformed by an unweighted-sum kernel.
as a remedy , propose -norm regularized mkl variants , which promote non - sparse kernel mixtures and subsequently have been extended to -norms .multiple kernel approaches have been applied to various computer vision problems outside our scope such multi - class problems which require mutually exclusive labels and object detection in the sense of finding object regions in an image .the latter reaches its limits when image concepts can not be represented by an object region anymore such as the _ outdoor_,_overall quality _ or _boring _ concepts in the imageclef2010 dataset which we will use . in this contribution , we study the benefits of sparse and non - sparse mkl in object recognition tasks. we report on empirical results on image data sets from the pascal visual object classes ( voc ) 2009 and imageclef2010 photoannotation challenges , showing that non - sparse mkl significantly outperforms the uniform mixture and -norm mkl .furthermore we discuss the reasons for performance gains and performance limitations obtained by mkl based on additional experiments using real world and synthetic data .the family of mkl algorithms is not restricted to svm - based ones .another competitor , for example , is multiple kernel learning based on kernel discriminant analysis ( kda ) .the difference between mkl - svm and mkl - kda lies in the underlying single kernel optimization criterion while the regularization over kernel weights is the same . outside the mkl family, however , within our problem scope of image classification and ranking lies , for example , which uses a logistic regression as base criterion and results in a number of optimization parameters equal to the number of samples times the number of input features .since the approach in uses a priori much more optimization variables , it poses a more challenging and potentially more time consuming optimization problem which limits the number of applicable features and can be evaluated for our medium scaled datasets in detail in the future .alternatives use more general combinations of kernels such as products with kernel widths as weighting parameters . as point outthe corresponding optimization problems are no longer convex. consequently they may find suboptimal solutions and it is more difficult to assess using such methods how much gain can be achieved via learning of kernel weights .this paper is organized as follows . in section [ mlt ] ,we briefly review the machine learning techniques used here ; the following section[experiment ] we present our experimental results on the voc2009 and imageclef2010 datasets ; in section [ section : disc_toy ] we discuss promoting and limiting factors of mkl and the sum - kernel svm in three learning scenarios .this section briefly introduces multiple kernel learning ( mkl ) , and kernel target alignment . 
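As a rough illustration of the wrapper idea behind non-sparse MKL before the formal treatment below, the following sketch alternates a standard SVM solve on the weighted kernel sum with an analytic update of the kernel weights under an l_p-norm constraint. The closed-form update follows the l_p-norm MKL literature, but everything here (data, constants, number of iterations) is an assumption for illustration and not the implementation used in the experiments reported later.

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=2.0, C=1.0, n_iter=20):
    """Schematic l_p-norm MKL: alternate SVM training with an analytic weight update.
    kernels: list of (n, n) PSD kernel matrices; y: labels in {-1, +1}."""
    m = len(kernels)
    theta = np.full(m, m ** (-1.0 / p))              # uniform start with ||theta||_p = 1
    for _ in range(n_iter):
        K = sum(t * Km for t, Km in zip(theta, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        sv, dc = svm.support_, svm.dual_coef_.ravel()  # dc_i = y_i * alpha_i on support vectors
        # Squared block norms: ||w_m||^2 = theta_m^2 * dc^T K_m[sv, sv] dc
        w2 = np.array([t ** 2 * dc @ Km[np.ix_(sv, sv)] @ dc
                       for t, Km in zip(theta, kernels)])
        theta = w2 ** (1.0 / (p + 1.0))               # closed-form update (l_p-norm MKL)
        theta /= np.sum(theta ** p) ** (1.0 / p)      # project back onto ||theta||_p = 1
    return theta, svm

# Tiny smoke test with two random linear kernels (placeholder data, not image features).
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(80, 5)), rng.normal(size=(80, 3))
y = np.sign(X1[:, 0] + 0.1 * rng.normal(size=80))
weights, model = lp_mkl([X1 @ X1.T, X2 @ X2.T], y, p=2.0)
print("kernel weights:", np.round(weights, 3))
```

On this toy the informative kernel (built from X1) should receive the larger weight; with p close to 1 the mixture becomes sparser, with large p it approaches the uniform sum.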
for more details we refer to the supplement and the cited works in it .given a finite number of different kernels each of which implies the existence of a feature mapping onto a hilbert space the goal of multiple kernel learning is to learn svm parameters and linear kernel weights simultaneously .this can be cast as the following optimization problem which extends support vector machines the usage of kernels is permitted through its partially dualized form : for details on the solution of this optimization problem and its kernelization we refer to the supplement and .while prior work on mkl imposes a -norm constraint on the mixing coefficients to enforce sparse solutions lying on a standard simplex , we employ a generalized -norm constraint for as used in .the implications of this modification in the context of image concept classification will be discussed throughout this paper . the kernel alignment introduced by measures the similarity of two matrices as a cosine angle of vectors under the frobenius product was argued in that centering is required in order to correctly reflect the test errors from svms via kernel alignment . centering in the corresponding feature spaces be achieved by taking the product , with is the identity matrix of size and is the column vector with all ones . the centered kernel which achieves a perfect separation of two classes is proportional to , where and and are the sizes of the positive and negative classes , respectivelyin this section , we evaluate -norm mkl in real - world image categorization tasks , experimenting on the voc2009 and imageclef2010 data sets . we also provide insights on _ when _ and _ why _ -norm mkl can help performance in image classification applications .the evaluation measure for both datasets is the average precision ( ap ) over all recall values based on the precision - recall ( pr ) curves .we experiment on the following data sets : * 1 .pascal2 voc challenge 2009 * we use the official data set of the _ pascal2 visual object classes challenge 2009 _ ( voc2009 ) , which consists of 13979 images .the use the official split into 3473 training , 3581 validation , and 6925 test examples as provided by the challenge organizers .the organizers also provided annotation of the 20 objects categories ; note that an image can have multiple object annotations .the task is to solve 20 binary classification problems , i.e. predicting whether at least one object from a class is visible in the test image . 
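The centering and kernel-target alignment just described can be written down in a few lines; this is a sketch of the standard centered alignment, using a plain y y^T target (i.e. without the class-size rescaling of the ideal kernel mentioned in the text), not the exact code of the cited works.

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix in feature space: K_c = H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K1, K2):
    """Cosine of the angle between two kernel matrices under the Frobenius inner product."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def centered_alignment_to_labels(K, y):
    """Alignment of the centered kernel with the two-class target kernel y y^T."""
    y = np.asarray(y, dtype=float)
    return alignment(center_kernel(K), np.outer(y, y))

# Example: alignment of a linear kernel with binary labels (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
print(round(centered_alignment_to_labels(X @ X.T, y), 3))
```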
although the test labels are undisclosed , the more recent voc datasets permit to evaluate ap scores on the test set via the challenge website ( the number of allowed submissions per week being limited ) .imageclef 2010 photoannotation * the imageclef2010 photoannotation data set consists of 8000 labeled training images taken from flickr and a test set with undisclosed labels .the images are annotated by 93 concept classes having highly variable concepts they contain both well defined objects such as _ lake , river , plants , trees , flowers , _ as well as many rather ambiguously defined concepts such as _ winter , boring , architecture , macro , artificial , motion blur,_however , those concepts might not always be connected to objects present in an image or captured by a bounding box .this makes it highly challenging for any recognition system .unfortunately , there is currently no official way to obtain test set performance scores from the challenge organizers .therefore , for this data set , we report on training set cross - validation performances only . as for voc2009we decompose the problem into 93 binary classification problems . again, many concept classes are challenging to rank or classify by an object detection approach due to their inherent non - object nature .as for the previous dataset each image can be labeled with multiple concepts . in all of our experiments we deploy 32 kernels capturing various aspects of the images .the kernels are inspired by the voc 2007 winner and our own experiences from our submissions to the voc2009 and imageclef2009 challenges .we can summarize the employed kernels by the following three types of basic features : * histogram over a bag of visual words over sift features ( bow - s ) , 15 kernels * histogram over a bag of visual words over color intensity histograms ( bow - c ) , 8 kernels * histogram of oriented gradients ( hog ) , 4 kernels * histogram of pixel color intensities ( hoc ) , 5 kernels .we used a higher fraction of bag - of - word - based features as we knew from our challenge submissions that they have a better performance than global histogram features .the intention was , however , to use a variety of different feature types that have been proven to be effective on the above datasets in the past but at the same time obeying memory limitations of maximally 25 gb per job as required by computer facilities used in our experiments ( we used a cluster of 23 nodes having in total 256 amd64 cpus and with memory limitations ranging in 3296 gb ram per node ) .the above features are derived from histograms that contain _ no _ spatial information .we therefore enrich the respective representations by using spatial tilings , which correspond to single levels of the pyramidal approach ( this is for capturing the spatial context of an image ) .furthermore , we apply a kernel on top of the enriched histogram features , which is an established kernel for capturing histogram features .the bandwidth of the kernel is thereby heuristically chosen as the mean distance over all pairs of training examples .the bow features were constructed in a standard way : at first , the sift descriptors were calculated on a regular grid with 6 pixel pitches for each image , learning a code book of size for the sift features and of size for the color histograms by -means clustering ( with a random initialization ) .finally , all sift descriptors were assigned to visual words ( so - called _ prototypes _ ) and then summarized into histograms within entire images or 
sub - regions .we computed the sift features over the following color combinations , which are inspired by the winners of the pascal voc 2008 challenge winners from the university of amsterdam : red - green - blue ( rgb ) , normalized rgb , gray - opponentcolor1-opponentcolor2 , and gray - normalized opponentcolor1-opponentcolor2 ; in addition , we also use a simple gray channel .we computed the 15-dimensional local color histograms over the color combinations red - green - blue , gray - opponentcolor1-opponentcolor2 , gray , and hue ( the latter being weighted by the pixel value of the value component in the hsv color representation ) .this means , for bow - s , we considered five color channels with three spatial tilings each ( , , and ) , resulting in 15 kernels ; for bow - c , we considered four color channels with two spatial tilings each ( and ) , resulting in 8 kernels .the hog features were computed by discretizing the orientation of the gradient vector at each pixel into 24 bins and then summarizing the discretized orientations into histograms within image regions .canny detectors are used to discard contributions from pixels , around which the image is almost uniform .we computed them over the color combinations red - green - blue , gray - opponentcolor1-opponentcolor2 , and gray , thereby using the two spatial tilings and .for the experiments we used four kernels : a product kernel created from the two kernels with the red - green - blue color combination but using different spatial tilings , another product kernel created in the same way but using the gray - opponentcolor1-opponentcolor2 color combination , and the two kernels using the gray channel alone ( but differing in their spatial tiling ) .the hoc features were constructed by discretizing pixel - wise color values and computing their 15 bin histograms within image regions . to this end , we used the color combinations red - green - blue , gray - opponentcolor1-opponentcolor2 , and gray . for each color combination the spatial tilings , , and were tried . in the experiments we deploy five kernels : a product kernel created from the three kernels with different spatial tilings with colors red - green - blue , a product kernel created from the three kernels with color combination gray - opponentcolor1-opponentcolor2 , and the three kernels using the gray channel alone(differing in their spatial tiling ) . note that building a product kernel out of kernels boils down to concatenating feature blocks ( but using a separate kernel width for each feature block ) .the intention here was to use single kernels at separate spatial tilings for the weaker features ( for problems depending on a certain tiling resolution ) and combined kernels with all spatial tilings merged into one kernel to keep the memory requirements low and let the algorithms select the best choice . in practice ,the normalization of kernels is as important for mkl as the normalization of features is for training regularized linear or single - kernel models .this is owed to the bias introduced by the regularization : optimal feature / kernel weights are requested to be small , implying a bias to towards excessively up - scaled kernels . 
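As an aside before turning to normalization, the kernel-on-histograms construction with the mean-distance bandwidth heuristic described above might look as follows; the chi-squared form of the kernel, the exp(-d/bandwidth) shape, and the data here are assumptions for illustration, not the authors' feature code.

```python
import numpy as np

def chi2_distance(A, B, eps=1e-10):
    """Pairwise chi-squared distances between rows of A and rows of B (histogram features):
    d(x, y) = sum_i (x_i - y_i)^2 / (x_i + y_i)."""
    diff2 = (A[:, None, :] - B[None, :, :]) ** 2
    denom = A[:, None, :] + B[None, :, :] + eps
    return np.sum(diff2 / denom, axis=2)

def chi2_kernel(A, B=None, bandwidth=None):
    """exp(-d_chi2 / bandwidth); the bandwidth defaults to the mean pairwise distance
    within A, i.e. the heuristic mentioned in the text."""
    if B is None:
        B = A
    D = chi2_distance(A, B)
    if bandwidth is None:
        bandwidth = chi2_distance(A, A).mean()
    return np.exp(-D / bandwidth)

# Example: bag-of-words histograms, L1-normalised per image (placeholder data).
rng = np.random.default_rng(0)
H = rng.random((6, 512))
H /= H.sum(axis=1, keepdims=True)
K = chi2_kernel(H)
print(K.shape, round(float(K[0, 1]), 3))
```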
in general, there are several ways of normalizing kernel functions .we apply the following normalization method , proposed in and entitled _ multiplicative normalization _ in ; on the feature - space level this normalization corresponds to rescaling training examples to unit variance , we treat the multi - label data set as binary classification problems , that is , for each object category we trained a one - vs .- rest classifier .multiple labels per image render multi - class methods inapplicable as these require mutually exclusive labels for the images .the respective svms are trained using the shogun toolbox . in order to shed light on the nature of the presented techniques from a statistical viewpoint , we first pooled all labeled data and then created 20 random cross - validation splits for voc2009 and 12 splits for the larger dataset imageclef2010 .for each of the 12 or 20 splits , the training images were used for learning the classifiers , while the svm / mkl regularization parameter and the norm parameter were chosen based on the maximal ap score on the validation images .thereby , the regularization constant is optimized by class - wise grid search over .preliminary runs indicated that this way the optimal solutions are attained inside the grid .note that for the -norm mkl boils down to a simple svm using a uniform kernel combination ( subsequently called sum - kernel svm ) . in our experiments, we used the average kernel svm instead of the sum - kernel one .this is no limitation in this as both lead to identical result for an appropriate choice of the svm regularization parameter . for a rigorous evaluation, we would have to construct a separate codebook for each cross validation split .however , creating codebooks and assigning descriptors to visual words is a time - consuming process .therefore , in our experiments we resort to the common practice of using a single codebook created from all training images contained in the official split .although this could result in a slight overestimation of the ap scores , this affects all methods equally and does not favor any classification method more than another our focus lies on a _ relative _ comparison of the different classification methods ; therefore there is no loss in exploiting this computational shortcut .0.05 in [ cols="^,^,^,^,^,^,^,^",options="header " , ] furthermore , we also investigate the single - kernel performance of each kernel : we observed the best single - kernel svm ( which attained ap scores of , , and for experiment 1 ) being inferior to both mkl ( regardless of the employed norm parameter ) and the sum - kernel svm .the differences were significant with fairly small p - values ( for example , for -mkl the p - value was about ) . 
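Returning to the multiplicative normalisation introduced at the beginning of this subsection, one common way to implement the rescaling to unit variance in feature space is sketched below; the exact normalisation constant used in the cited works may differ slightly, so treat this as an assumption.

```python
import numpy as np

def multiplicative_normalize(K):
    """Divide a kernel matrix by the empirical feature-space variance,
    var = (1/n) tr(K) - (1/n^2) sum_ij K_ij,
    so that training examples have roughly unit variance in feature space."""
    n = K.shape[0]
    var = np.trace(K) / n - K.sum() / n ** 2
    return K / var

# Example: an "excessively up-scaled" kernel becomes comparable to others after rescaling.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
K_raw = 1000.0 * (X @ X.T)
K_norm = multiplicative_normalize(K_raw)
print(round(np.trace(K_norm) / 30 - K_norm.sum() / 30 ** 2, 3))   # ~1.0 by construction
```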
we emphasize that we did not design the example in order to achieve a maximal performance gap between the non sparse mkl and its competitors .for such an example , see the toy experiment of , which is replicated in the supplemental material including additional analysis .our focus here was to confirm our hypothesis that kernels in semantic concept classification are based on varying subsets of the data although mkl computes global weights , it emphasizes on kernels that are relevant on the largest informative set and thus approximates the infeasible combinatorial problem of computing an optimal partition / grid of the space into regions which underlie identical optimal weights .though , in practice , we expect the situation to be more complicated as informative subsets may overlap between kernels .nevertheless , our hypothesis also opens the way to new directions for learning of kernel weights , namely restricted to subsets of data chosen according to a meaningful principle .finding such principles is one the future goals of mkl we sketched one possibility : locality in feature space . a first starting point may be the work of on localized mklwhen measuring data with different measuring devices , it is always a challenge to combine the respective devices uncertainties in order to fuse all available sensor information optimally . in this paper, we revisited this important topic and discussed machine learning approaches to adaptively combine different image descriptors in a systematic and theoretically well founded manner .while mkl approaches in principle solve this problem it has been observed that the standard -norm based mkl often can not outperform svms that use an average of a large number of kernels .one hypothesis why this seemingly unintuitive result may occur is that the sparsity prior may not be appropriate in many real world problems especially , when prior knowledge is already at hand .we tested whether this hypothesis holds true for computer vision and applied the recently developed non - sparse mkl algorithms to object classification tasks .the -norm constitutes a slightly less severe method of sparsification . by choosing as a hyperparameter , which controls the degree of non - sparsity and regularization , from a set of candidate values with the help of a validation data, we showed that -mkl significantly improves svms with averaged kernels and the standard sparse mkl .future work will study localized mkl and methods to include hierarchically structured information into mkl , e.g. knowledge from taxonomies , semantic information or spatial priors .another interesting direction is mkl - kda .the difference to the method studied in the present paper lies in the base optimization criterion : kda leads to non - sparse solutions in while ours leads to sparse ones ( i.e. , a low number of support vectors ) .while on the computational side the latter is expected to be advantageous , the first one might lead to more accurate solutions .we expect the regularization over kernel weights ( i.e. 
, the choice of the norm parameter ) having similar effects for mkl - kda like for mkl - svm .future studies will expand on that topic .this work was supported in part by the federal ministry of economics and technology of germany ( bmwi ) under the project theseus ( fkz 01mq07018 ) , by federal ministry of education and research ( bmbf ) under the project remind ( fkz 01-is07007a ) , by the deutsche forschungsgemeinschaft ( dfg ) , and by the fp7-ict program of the european community , under the pascal2 network of excellence ( ict-216886 ) .marius kloft acknowledges a scholarship by the german academic exchange service ( daad ) .
combining information from various image features has become a standard technique in concept recognition tasks . however , the optimal way of fusing the resulting kernel functions is usually unknown in practical applications . multiple kernel learning ( mkl ) techniques allow to determine an optimal linear combination of such similarity matrices . classical approaches to mkl promote sparse mixtures . unfortunately , so - called 1-norm mkl variants are often observed to be outperformed by an unweighted sum kernel . the contribution of this paper is twofold : we apply a recently developed non - sparse mkl variant to state - of - the - art concept recognition tasks within computer vision . we provide insights on benefits and limits of non - sparse mkl and compare it against its direct competitors , the sum kernel svm and the sparse mkl . we report empirical results for the pascal voc 2009 classification and imageclef2010 photo annotation challenge data sets . about to be submitted to plos one .
blood cell production process is based upon the differentiation of so - called hematopoietic stem cells , located in the bone marrow .these undifferentiated and unobservable cells have unique capacities of differentiation ( the ability to produce cells committed to one of the three blood cell types : red blood cells , white cells or platelets ) and self - renewal ( the ability to produce cells with the same properties ) .mathematical modelling of hematopoietic stem cells dynamics has been introduced at the end of the seventies by mackey .he proposed a system of two differential equations with delay where the time delay describes the cell cycle duration . in this model ,hematopoietic stem cells are separated in proliferating and nonproliferating cells , these latter being introduced in the proliferating phase with a nonlinear rate depending only upon the nonproliferating cell population .the resulting system of delay differential equations is then uncoupled , with the nonproliferating cells equation containing the whole information about the dynamics of the hematopoietic stem cell population .the stability analysis of the model in highlighted the existence of periodic solutions , through a hopf bifurcation , describing in some cases diseases affecting blood cells , characterized by periodic oscillations .the model of mackey has been studied by many authors , mainly since the beginning of the nineties .mackey and rey numerically studied the behavior of a structured model based on the model in , stressing the existence of strange behaviors of the cell populations ( like oscillations , or chaos ) .mackey and rudnicky developed the description of blood cell dynamics through an age - maturity structured model , stressing the influence of hematopoietic stem cells on blood production .their model has been further developed by dyson et al . , adimy and pujo - menjouet , adimy and crauste and adimy et al .recently , adimy et al . studied the model proposed in taking into account that cells in cycle divide according to a density function ( usually gamma distributions play an important role in cell cycles durations ) , contrary to what has been assumed in the above - cited works , where the division has always been assumed to occur at the same time .more recently , pujo - menjouet and mackey and pujo - menjouet et al . 
gave a better insight into the model of mackey , highlighting the role of each parameter of the model on the appearance of oscillations and , more particularly , of periodic solutions , when the model is applied to the study of chronic myelogenous leukemia .contrary to the assumption used in all of the above - cited works , we study , in this paper , the model introduced by mackey considering that the rate of introduction in the proliferating phase , which contains the nonlinearity of this model , depends upon the total population of hematopoietic stem cells , and not only upon the nonproliferating cell population .the introduction in cell cycle is partly known to be a consequence of an activation of hematopoietic stem cells due to molecules fixing on them .hence , the entire population is in contact with these molecules and it is reasonable to think that the total number of hematopoietic stem cells plays a role in the introduction of nonproliferating cells in the proliferating phase .the first consequence is that the model is not uncoupled , and the nonproliferating cell population equation does not contain the whole information about the dynamics of blood cell production , contrary to the model in . therefore , we are lead to the study of a modified system of delay differential equations ( system ( [ eqs])([eqn2 ] ) ) , where the delay describes the cell cycle duration , with a nonlinear part depending on one of the two populations . secondly , while studying the local asymptotic stability of the steady states of our model , we have to determine roots of a characteristic equation taking the form of a first degree exponential polynomial with delay - dependent coefficients . for such equations , beretta andkuang developed a very useful and powerful technic , that we will apply to our model .our aim is to show , through the study of the steady states stability , that our model , described in ( [ eqs])([eqn2 ] ) , exhibits similar properties than the model in and that it can be used to model blood cells production dynamics with good results , in particularly when one is interested in the appearance of periodic solutions in blood cell dynamics models .we want to point out that the usually accepted assumption about the introduction rate may be limitative and that our model can display interesting dynamics , such as stability switches , that have never been noted before .the present work is organized as follows . in the next section we present our model ,stated in equations ( [ eqs ] ) and ( [ eqn2 ] ) .we then determine the steady states of this model . in section [ sceas ] ,we linearize the system ( [ eqs])([eqn2 ] ) about a steady state and we deduce the associated characteristic equation . in section [ stss ] , we establish necessary and sufficient conditions for the global asymptotic stability of the trivial steady state ( which describes the extinction of the hematopoietic stem cell population ) . in section [ spss ] ,we focus on the asymptotic stability of the unique nontrivial steady state . by studying the existence of pure imaginary roots of a first degree exponential polynomial with delay - dependent coefficients ,we obtain the existence of a critical value of the time delay for which a hopf bifurcation occurs at the positive steady state , leading to the appearance of periodic solutions . 
using numerical illustrations ,we show how these solutions can be related to periodic hematological diseases in section [ snum ] , and we note the existence of a stability switch .we conclude with a discussion .let consider a population of hematopoietic stem cells , located in the bone marrow .these cells actually perform a succession of cell cycles , in order to differentiate in blood cells ( white cells , red blood cells and platelets ) . according to early works , by burns and tannock for example, we assume that cells in cycle are divided in two groups : proliferating and nonproliferating cells . the respective proliferating and nonproliferating cell populationsare denoted by and .all hematopoietic stem cells die with constant rates , namely for proliferating cells and for nonproliferating cells .these latter are introduced in the proliferating phase , in order to mature and divide , with a rate . at the end of the proliferating phase , cells divide in two daughter cells which immediately enter the nonproliferating phase .then the populations and satisfy the following evolution equations ( see mackey or pujo - menjouet and mackey ) , in each of the above equations , denotes the average duration of the proliferating phase .the term then describes the survival rate of proliferating cells .the last terms in the right hand side of equations ( [ eqp ] ) and ( [ eqn ] ) account for cells that have performed a whole cell cycle and leave ( enter , respectively ) the proliferating phase ( the nonproliferating phase , respectively ) .these cells are in fact nonproliferating cells introduced in the proliferating phase a time earlier .the factor 2 in equation ( [ eqn ] ) represents the division of each proliferating hematopoietic stem cell in two daughter cells .we assume that the rate of introduction depends upon the total population of hematopoietic stem cells , that we denote by . with our notations , .this assumption stresses the fact that the nature of the trigger signal for introduction in the proliferating phase is the result of an action on the entire cell population .for example , it can be caused by molecules entering the bone marrow and fixing on hematopoietic stem cells , activating or inhibiting their proliferating capacity .this occurs in particularly for the production of red blood cells .their regulation is mainly mediated by an hormone ( a growth factor , in fact ) called erythropoietin , produced by the kidneys under a stimulation by circulating blood cells ( see blair et al . , mahaffy et al . 
) .hence we assume that the function is supposed to be continuous and positive on , and strictly decreasing .this latter assumption describes the fact that the less hematopoietic stem cells in the bone marrow , the more cells introduced in the proliferative compartment .furthermore , we assume that adding equations ( [ eqp ] ) and ( [ eqn ] ) we can then deduce an equation satisfied by the total population of hematopoietic stem cells .we assume , for the sake of simplicity , that proliferating and nonproliferating cells die with the same rate , that is .then the populations and satisfy the following nonlinear system with time delay , corresponding to the cell cycle duration , from hale and verduyn lunel , for each continuous initial condition , system ( [ eqs])([eqn2 ] ) has a unique continuous solution , well - defined for .[ lempos ] for all nonnegative initial condition , the unique solution of ( [ eqs])([eqn2 ] ) is nonnegative .first assume that there exists such that and for .then , from ( [ eqn2 ] ) and since is a positive function , consequently , for .if there exists such that and for , then the same reasoning , using ( [ eqs ] ) , leads to and we deduce that for .the positivity of and , solutions of system ( [ eqs])([eqn2 ] ) , does not a priori implies that is nonnegative .using a classical variation of constant formula , the solutions of ( [ eqp ] ) are given , for , by setting the change of variable , we obtain +e^{-\delta t}\int_{t-\tau}^t e^{\delta\theta}\beta(s(\theta))n(\theta ) d\theta.\ ] ] consequently , for if that is if this condition is biologically relevant since represents the population of cells that have been introduced in the proliferating phase at time ] into .one can check that the function , defined for by satisfies .\ ] ] hence , is a lyapunov functional ( see hale and verduyn lunel ) on the set \leq 0 \right\}.\ ] ] with assumption ( [ tssprop ] ) , . in the proof of proposition [ propzero2 ] , we did not directly use the properties of lyapunov functionals , but the function is defined by where ( respectively , ) is defined by ( respectively , ) , ] is a decreasing function . using these expressions, we can stress that and are positive decreasing continuous functions of , continuously differentiable , such that , and . the characteristic equation ( [ ce2 ] ) , with and ,is then given by with ^{-\delta\tau}.\ ] ] notice that for all .moreover , from ( [ eqss ] ) , we obtain in particular , for .taking in ( [ cestar ] ) , we obtain that is since is decreasing , we deduce that the only eigenvalue of ( [ cestar ] ) is then negative . the following lemma follows .[ lemmazero ] when and , the nontrivial steady - state of system ( [ eqs])([eqn2 ] ) is locally asymptotically stable , and the system ( [ eqs])([eqn2 ] ) undergoes a transcritical bifurcation .when increases and remains in the interval , the stability of the steady state can only be lost if purely imaginary roots appear .therefore , we investigate the existence of purely imaginary roots of ( [ cestar ] ) .let , , be a pure imaginary eigenvalue of ( [ cestar ] ) .separating real and imaginary parts , we obtain one can notice , firstly , that if is a solution of ( [ eqcos])([eqsin ] ) then also satisfies this system .secondly , is not a solution of ( [ eqcos])([eqsin ] ) .otherwise , we would obtain for some , which contradicts for .therefore can not be a solution of ( [ eqcos])([eqsin ] ) .thus , in the following , we will only look for positive solutions of ( [ eqcos])([eqsin ] ) . 
from ( [ eqcos ] ), a necessary condition for equation ( [ cestar ] ) to have purely imaginary roots is that since , this implies in particularly that must be negative .moreover , from ( [ b ] ) , the above condition is equivalent to using the definitions of , , and equality ( [ ss1 ] ) , this inequality becomes the following condition on , [ lemmachi ] let _2 ] by then the function satisfies and , for ] and , from ( h ) , is decreasing on ] . separating real and imaginary parts in this equality we deduce we recall that is necessarily strictly negative . using ( [ eqcossin ] ) ,the above system is equivalent to &=&0 .\end{array}\ ] ] since and satisfies , from ( [ omega ] ) , , we obtain substituting the second equation in the first one , this yields , so since and , we obtain a contradiction .hence is a simple root of ( [ cestar ] ) . in the following, we do not mention the dependence of the coefficients and ( and their derivatives ) with respect to .now , from ( [ eqdelta ] ) , we obtain since , we deduce therefore , for , we obtain then , \omega(\tau_c)^2+b(1+a\tau_c)(b^{\prime}a - a^{\prime}b ) } { [ b^{\prime}a - a^{\prime}b+b\omega(\tau_c)^2 ] + [ b^{\prime}-ab]^2\omega(\tau_c)^2}.\ ] ] noticing that we get , \omega(\tau_c)^2 \\ & & \qquad\qquad\qquad\quad + b(1+a\tau_c)(b^{\prime}a - a^{\prime}b)\bigg\}. \end{array}\ ] ] since is a purely imaginary root of ( [ cestar ] ) , then , from ( [ omega ] ) , substituting this expression in ( [ eqtemp ] ) , we obtain , after simplifications \bigg\}.\ ] ] as we already noticed , if equation ( [ cestar ] ) has pure imaginary roots then necessarily .we then deduce ( [ sign ] ) and the proof is complete . using this last proposition and the previous results about the existence of purely imaginary roots of( [ ce ] ) , we can state and prove the following theorem , dealing with the asymptotic stability of .[ theohopf ] assume that ( [ condexist2 ] ) holds true and ( h ) and ( h ) are fulfilled . * if ( defined in ( [ sn ] ) ) has no root on the interval , defined in lemma [ lemmachi ] , then the positive steady state of ( [ eqs])([eqn2 ] ) is locally asymptotically stable for . *if has at least one positive root then is locally asymptotically stable for and a hopf bifurcation occurs at for if where , , and .first , from lemma [ lemmazero ] , we know that is locally asymptotically stable when . if has no positive root on the interval , then the characteristic equation ( [ ce ] ) has no pure imaginary root ( see remark [ rem1 ] and lemma [ lemmaz ] ) .consequently , the stability of can not be lost when increases .we obtain the statement in * ( i)*. now , if has at least one positive root , say , then equation ( [ ce ] ) has a pair of simple conjugate pure imaginary roots for . from ( [ tc ] ) together with proposition [ propomega ] , we have either by contradiction , we assume that there exists a branch of characteristic roots such that and for , close to .then there exists a characteristic root such that and . since is locally asymptotically stable when , applying rouch s theorem , we obtain that all characteristic roots of ( [ ce ] ) have negative real parts when , and we obtain a contradiction .thus , in this case , a hopf bifurcation occurs at when .the result stated in ( ii ) leads , through the hopf bifurcation , to the existence of periodic solutions for system ( [ eqs])([eqn2 ] ) . 
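Numerically, the search for purely imaginary roots described above can be organised as a sweep over the delay in the spirit of the Beretta–Kuang method: for a characteristic equation of the form lambda + A(tau) + B(tau) e^{-lambda tau} = 0 one computes omega(tau) = sqrt(B(tau)^2 - A(tau)^2) wherever this is defined and looks for sign changes of S_n(tau) = tau - (theta(tau) + 2 n pi) / omega(tau). The coefficient functions below are placeholders chosen only so that the sweep exhibits two crossings (echoing the two critical delays found later in the text); they are not the model's actual A(tau) and B(tau), and the sign conventions should be checked against the derivation above.

```python
import numpy as np

def stability_switch_candidates(A, B, taus, n_max=3):
    """Beretta-Kuang style sweep for lambda + A(tau) + B(tau) exp(-lambda*tau) = 0.
    Returns delays where some S_n(tau) changes sign, i.e. candidate critical delays
    at which a pair of purely imaginary roots may cross the imaginary axis.
    A, B: callables giving the delay-dependent coefficients (placeholders here)."""
    crossings = []
    for n in range(n_max):
        prev_tau, prev_s = None, None
        for tau in taus:
            a, b = A(tau), B(tau)
            if b ** 2 <= a ** 2:                 # no purely imaginary root possible here
                prev_tau, prev_s = None, None
                continue
            omega = np.sqrt(b ** 2 - a ** 2)
            # theta in [0, 2*pi) solving cos(theta) = -a/b, sin(theta) = omega/b
            theta = np.arctan2(omega / b, -a / b) % (2.0 * np.pi)
            s = tau - (theta + 2.0 * np.pi * n) / omega
            if prev_s is not None and prev_s * s < 0:
                crossings.append(0.5 * (prev_tau + tau))
            prev_tau, prev_s = tau, s
    return sorted(crossings)

# Placeholder coefficients (NOT the model's): decreasing delayed term, constant instantaneous term.
A = lambda tau: 0.2 + 0.0 * tau
B = lambda tau: -1.0 * np.exp(-0.05 * tau)
print(stability_switch_candidates(A, B, np.linspace(0.1, 35.0, 3000)))
```

With these placeholder coefficients the sweep reports two candidate delays, mimicking the situation in which stability is lost at a first critical delay and regained at a second one.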
in the next section ,we apply the above - mentioned results of stability to a particular introduction rate and we present some numerical illustrations .we develop , in this section , numerical illustrations of the above mentioned results ( mainly the ones stated in theorem [ theohopf ] ) .let define ( see ) the introduction rate by the parameter represents the maximal rate of introduction in the proliferating phase , is the value for which attains half of its maximum value , and is the sensitivity of the rate of reintroduction .the coefficient describes the reaction of due to external stimuli , the action of a growth factor for example ( some growth factors are known to trigger the introduction of nonproliferating cells in the proliferating phase ) .then , from ( [ condexist2 ] ) , the unique positive steady state of ( [ eqs])([eqn2 ] ) exists if and only if from ( [ ss1])([ss2 ] ) , it is defined by note that the function is defined by for ] if and only if .in this case ( _ _ h__ ) is fulfilled .if , then is decreasing on the interval ] , yet is uniquely defined .note that }{(2e^{-\delta\tau}-1)\beta_0},\ ] ] with then , assuming that ( [ n ] ) holds true , we define the functions , as in ( [ sn ] ) , for by we choose the parameters according to : notice that the value of is in fact normalized and does not influence the stability of system ( [ eqs])([eqn2 ] ) since all coefficients actually do not depend on .the value of only influences the shape of the oscillations and the values of the steady states . using maple to determine the roots of , we first check that ( and consequently all functions ) is strictly negative on for .hence , from theorem [ theohopf ] , the positive steady state of ( [ eqs])([eqn2 ] ) is locally asymptotically stable for . for , pujo - menjouet et al . noticed , for the model ( [ eqp])([eqn ] ) with the introduction rate depending only upon the nonproliferating phase population , that oscillations may be observed .we choose , in keeping with values in .then , we find that one can see on figure [ z ] that has two positive roots in this case , days and days , and that is strictly negative , so all functions , with have no roots .consequently , there exist two critical values , and , for which a stability switch can occur at .( left ) and ( right ) are drawn on the interval for parameters given by ( [ par ] ) and .one can see that has exactly two roots , and , and has no root.,title="fig:",width=226,height=151 ] ( left ) and ( right ) are drawn on the interval for parameters given by ( [ par ] ) and .one can see that has exactly two roots , and , and has no root.,title="fig:",width=226,height=151 ] for , one can check that the populations are asymptotically stable on figure [ stab3p5 ] . 
in this case days and the solutions of ( [ eqs])([eqn2 ] ) oscillate transiently to the steady state .numerical simulations of the solutions of ( [ eqs])([eqn2 ] ) are carried out with dde23 , a matlab solver for delay differential equations .days , and the other parameters given by ( [ par ] ) with , the solutions ( dashed line ) and ( solid line ) oscillate transiently to the steady state , which is asymptotically stable .damped oscillations are observed.,width=226,height=151 ] when , one can check that so condition ( [ tc ] ) holds , and a hopf bifurcation occurs at , from theorem [ theohopf ] .this is illustrated on figure [ bif1 ] .periodic solutions with periods about days are observed at the bifurcation , and the steady state becomes unstable .days , and the other parameters given by ( [ par ] ) with , a hopf bifurcation occurs and the steady state of ( [ eqs])([eqn2 ] ) is unstable .the periodic solutions ( dashed line ) and ( solid line ) are represented in ( a ) , and we can observe the solutions in the -plane in ( b ) .periods of the oscillations are about 15 days. , title="fig:",width=226,height=151 ] days , and the other parameters given by ( [ par ] ) with , a hopf bifurcation occurs and the steady state of ( [ eqs])([eqn2 ] ) is unstable .the periodic solutions ( dashed line ) and ( solid line ) are represented in ( a ) , and we can observe the solutions in the -plane in ( b ) .periods of the oscillations are about 15 days ., title="fig:",width=226,height=151 ] when increases after the bifurcation , one can observe oscillating solutions with longer periods ( in the order of 20 to 30 days ) , as it can be seen in figure [ osc7_20 ] .days , and the other parameters given by ( [ par ] ) with , long periods oscillations are observed , with periods about 20 - 25 days .the steady state is unstable.,title="fig:",width=226,height=151 ] days , and the other parameters given by ( [ par ] ) with , long periods oscillations are observed , with periods about 20 - 25 days .the steady state is unstable.,title="fig:",width=226,height=160 ] this phenomenon has already been observed by pujo - menjouet et al .it can be related to diseases affecting blood cells , the so - called periodic hematological diseases , which are characterized by oscillations of circulating blood cell counts with long periods compared to the cell cycle duration . 
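The simulations reported here use dde23, MATLAB's adaptive solver for delay differential equations. For readers without MATLAB, the following minimal fixed-step Euler scheme with a constant history reproduces the basic mechanics of such a simulation; the right-hand side is a generic placeholder, not the system ([eqs])–([eqn2]), whose equations are not reproduced in this excerpt.

```python
import numpy as np

def integrate_dde(rhs, history, tau, t_end, dt=0.01):
    """Fixed-step Euler integration of y'(t) = rhs(y(t), y(t - tau)) with the
    constant history y(t) = history for t <= 0.  A minimal sketch only; an
    adaptive solver such as dde23 should be preferred in practice."""
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    y = np.empty((n_steps + 1, len(history)))
    y[0] = history
    for k in range(n_steps):
        y_lag = history if k < n_delay else y[k - n_delay]
        y[k + 1] = y[k] + dt * rhs(y[k], y_lag)
    return np.linspace(0.0, t_end, n_steps + 1), y

# placeholder two-compartment right-hand side (illustration only, not the model of the paper)
def rhs(y, y_lag):
    s, n = y
    s_lag, n_lag = y_lag
    return np.array([-0.05 * s + 0.4 * n_lag / (1.0 + n_lag ** 2),
                     -0.05 * n + 0.4 * s_lag / (1.0 + s_lag ** 2)])

t, y = integrate_dde(rhs, history=np.array([1.0, 0.5]), tau=7.0, t_end=200.0)
print("final state:", np.round(y[-1], 4))
```

Damped or sustained oscillations of the kind shown in the figures can then be detected by inspecting the solution after the transient, for instance by tracking successive extrema of one component.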
among the wide variety of periodic hematological diseases, we can cite chronic myelogenous leukemia , a cancer of white blood cells with periods usually falling in the range of 70 to 80 days , and cyclical neutropenia which is known to exhibit oscillations around 3 weeks of circulating neutrophils ( white cells ) , as observed on figure [ osc7_20 ] .eventually , one can note that when passes through the second critical value , stability switches and the steady state becomes stable again ( see figure [ stab9 ] ) .days , and the other parameters given by ( [ par ] ) with , damped oscillations are observed and the steady state is stable.,width=226,height=151 ]we considered a nonlinear model of blood cell dynamics in which the nonlinearity depends upon the entire hematopoietic stem cell population , contrary to the common assumption used in previous works dealing with blood cell production models .then we were lead to the study of a new nonlinear system of two differential equations with delay ( describing the cell cycle duration ) modelling the hematopoietic stem cells dynamics .we obtained the existence of two steady states for this model : a trivial one and a positive delay - dependent steady state . through sections [ stss ] and [ spss ], we performed the stability analysis of our model .we determined necessary and sufficient conditions for the global asymptotic stability of the trivial steady state of system ( [ eqs])([eqn2 ] ) , which describes the population s dying out . using an approach proposed by beretta and kuang , we analyzed a first degree exponential polynomial characteristic equation with delay - dependent coefficients in order to obtain the existence of a hopf bifurcation for the positive steady state ( see theorem [ theohopf ] ) , leading to the existence of periodic solutions .on the example presented in the previous section , we obtained long periods oscillations , which can be related to some periodic hematological diseases ( in particularly , to cyclical neutropenia ) .this result is in keeping with previous analysis of blood cell dynamics models ( as it can be found in ) .periodic hematological diseases are particular diseases mostly originated from the hematopoietic stem cell compartment .the appearance of periodic solutions in our model with periods that can be related to the ones observed in some periodic hematological diseases stresses the interesting properties displayed by our model .periods of oscillating solutions can for example be used to determine the length of cell cycles in hematopoietic stem cell populations that can not be directly determined experimentally .moreover , stability switches have been observed , due to the structure of the equations ( nonlinear equations with delay - dependent coefficients ) .such a behavior had been noted in previous works dealing with blood cell production models ( see ) , but it had never been mathematically explained .we can note that our assumption that proliferating and nonproliferating cells die with the same rate may be too limitative , since pujo - menjouet et al . already noticed that the apoptotic rate ( the proliferating phase mortality rate ) plays an important role in the appearance of oscillating solutions .however , by assuming that the two populations die with different rates , we are lead to a second order exponential polynomial characteristic equation , and the calculations are more difficult than the ones carried out in the present work . we let it for further analysis .c. m. booth , l. m. matukas , g. a. 
tomlinson, a. r. rachlis, d. b. rose, h. a. dwosh, et al., clinical features and short-term outcomes of 144 patients with sars in the greater toronto area, jama 289 (2003) 2801-10. mackey, dynamic hematological disorders of stem cell origin, in: j.g. vassileva-popova and e.v. jensen (eds.), biophysical and biochemical information transfer in recognition, plenum press, new york, 1979, pp. 941-956.
we analyze the asymptotic stability of a nonlinear system of two differential equations with delay describing the dynamics of blood cell production . this process takes place in the bone marrow where stem cells differentiate throughout divisions in blood cells . taking into account an explicit role of the total population of hematopoietic stem cells on the introduction of cells in cycle , we are lead to study a characteristic equation with delay - dependent coefficients . we determine a necessary and sufficient condition for the global stability of the first steady state of our model , which describes the population s dying out , and we obtain the existence of a hopf bifurcation for the only nontrivial positive steady state , leading to the existence of periodic solutions . these latter are related to dynamical diseases affecting blood cells known for their cyclic nature . _ laboratoire de mathmatiques appliques , umr 5142 , _ + _ universit de pau et des pays de ladour , _ + _ avenue de luniversit , 64000 pau , france . _ + _ anubis project , inria futurs _ + _ e - mail : fabien.crauste-pau.fr_ + _ keywords : _ asymptotic stability , delay differential equations , characteristic equation , delay - dependent coefficients , hopf bifurcation , blood cell model , stem cells .
plants , diverse species of algae , and other organisms acquire chemical energy in the form of carbohydrates ( sugars ) from the sun through photosynthesis . these photoassimilates are exported from plant leaves via the phloem vascular system to support life in distal parts of the organism .the flow is driven by a build - up of osmotic pressure in the veins , where high concentrations of sugar direct a bulk flow of sweet sap out of the leaf .the vascular network is critical to sustenance and growth ; more subtly , it is also of importance as carrier of signaling molecules that integrate disparate sources of information across the organism .the mechanisms that influence phloem structure to fulfill these multiple objectives , however , remain poorly understood .the phloem is a complex distribution system responsible for transporting a large number of organic molecules , defensive compounds , and developmental signals by bulk fluid flow through a network of enlongated sieve element cells which are connected to each other by porous sieve plates , effectively forming long tubes .the phloem thus serves functions analogous to a combination of the nervous- and circulatory systems of animals .the role of the phloem in the transport of photoassimlates has been known since the 17th century , but it was not until 1930 that the role of the phloem sieve element as the channel of carbohydrate transport in plants was experimentally demonstrated .although the primary role of phloem transport is to distribute the products of photosynthesis , it also plays a role in long - distance transmission of signals for some developmental and environmental responses .for instance , flowering is induced by transmission of a phloem - mobile hormone from the leaves to the meristem .also , pathogen protection and related gene expression signals have been shown to occur through the phloem .phloem flow occurs in an interconnected network of long , narrow cylindrical cells . in these cells , an energy - rich solution of sap containing wt sugars flows toward distal regions of the plant .transport is driven by the osmotic mnch pump .the flow is initiated in the leaf , where sugars produced by photosynthesis accumulate in phloem cells .this induces an osmotic gradient with respect to the surrounding tissue , drawing in water into the phloem cells . 
on the scale of the phloem tissue , this process results in a bulk flow of sap along the major veins out of the leaf , towards regions of low osmotic potential in the plant , such as roots or fruits .mnch flow in a conifer needle is sketched in fig .[ fig : abies - cross - section ] , where the phloem tissue is located within a single large vein near the center of the leaf cross - section plane .close to the tip of the needle only few sieve tubes exist to support the flow of sap , more continually being added to the bundle as one moves closer to the petiole .neighboring sieve tubes maintain hydraulic connections through plasmodesmata .the driving force responsible for carbon export is the steady production of photosynthate in the leaf mesophyll located close to the leaf surface .the mechanism by which sugars accumulate in the phloem varies between species .most plants , however , can be roughly divided into two groups : active and passive phloem loaders .active loaders use membrane transporters or sugar polymerization to accrue carbon in the phloem , while passive loaders rely on cell - to - cell diffusion aided by bulk flow through plasmodesmata pores .trees are predominantly passive loaders , while many herbaceous plants use active phloem loading . the quantity of material exported through the phloem is generally assumed to be strongly dependent on physiological factors such as solar radiation intensity and water availability , but it also likely to depend on details of the leaf vascular architecture .for instance , the positioning and network structure of water - transporting xylem conduits in plant stems and leaves has been shown to play an important role in determining the efficiency of co uptake . while the branched vein network architecture in plant leaves has been studied extensively , less is known about the functional elements . except for a few species of grasses , which have a parallel vein geometry similar to needles ,the detailed functional architecture of the phloem ( i.e. the location , number , and size of conducting elements ) in plant leaves remains unknown , in part due to the high sensitivity of phloem tissue to disturbances . in this work, we aim to answer two basic questions .first , we ask what is the design of the phloem vascular system in conifer needles , i.e. what are its geometric and hydrodynamic properties .second , we aim to determine whether the observed structure is consistent with energy minimization or maximizing flow rate to elucidate the selective force that influences phloem structure .we chose conifer needles for this study in part due to their linear structure without branching veins , a geometric feature which greatly simplifies the analysis of the transport process .moreover , conifer trees inhabit diverse environments and the vascular network has thus been subjected to a broad selective pressure . accordingly , we present the experimental results of phloem geometry in needles of four conifer species from a diverse set of habitats and needle sizes . 
based on these , we develop a minimal mathematical model of sugar transport in leaves , and use a constrained optimization to derive the optimal phloem geometry in a one - dimensional leaf .finally , we compare modeling with experimental results , and conclude by discussing the implications of our results for the study of conifers and plants in general .we measured the phloem geometry in needles of four conifer species shown in fig .[ fig : needles ] : _ abies nordmanniana _ , _ pinus palustris _ , _ pinus cembra _ , and _picea omorika_. three to six needles were sampled from each species .the species encompass the range of typical needle sizes of conifer species , from _ p. omorika _( needle length ) to _ p. palustris _ ( ) . additionally , they incorporated plants from diverse habitats and climates ranging from _ p. palustris _ ,whose habitat are the gulf and atlantic coastal plains of the united states , to the european alpine _p. cembra_. the measurements were conducted by performing transverse sections at 10 - 20 positions along the length of the needle .phloem cells were identified by the presence of a stain as described in the materials and methods section . a typical stack of images obtained this way is shown in fig . [ fig : abies - cross - section ] ( c ) .to quantify the phloem structure we measured the size of all sieve elements in each cross section .starting from the tip of the needle , we typically observed an increase in the total conductive area towards the base of the leaf ( fig .[ fig : master - a - linlin ] ( a ) ) .the cross - sectional area of individual sieve elements ( fig . [ fig : master - a - linlin ] ( b ) ) , however , shows only minimal variation as a function of length ( correlation to position using _ pearson s _ for all needles in fig .[ fig : master - a - linlin ] ( b ) ) , implying that the main variation in transport area is driven by changes in the number of conduits . when the total phloem transport area is normalized and plotted relative to the needle length on logarithmic axes ( fig .[ fig : master - a - loglog ] ) , it is seen to behave roughly as a power law with average exponents per species between ( _ p .omorika _ ) and ( _ a .nordmanniana _ ) , see table 1 .the number of sieve elements follows a similar scaling with .since the cross - sectional area of individual sieve elements is nearly constant , this result is to be expected . to rationalize the observed vein structure ,we develop a simple model of one - dimensional sugar transport in a bundle of parallel phloem tubes based on the work of horwitz and thompson and holbrook .sugar flow commences near the needle tip ( ) , where a few phloem conduits initiate the export of photoassimilates . approaching the needle base ( ) ,the number of conducting channels increases while the size of individual phloem tubes remains constant . 
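The scaling exponents and correlations quoted above come from fits of this kind; the snippet below shows the corresponding computation on hypothetical (position, area) measurements. The numbers are placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# hypothetical (distance from tip, total phloem area) measurements for one needle;
# placeholder values, not the data of the study
x = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
area = np.array([310.0, 480.0, 700.0, 990.0, 1400.0, 2000.0, 2450.0])

# exponent alpha of A(x) ~ x**alpha from a least-squares fit in log-log space
slope, intercept, r_value, p_value, std_err = stats.linregress(np.log(x), np.log(area))
print(f"fitted exponent alpha = {slope:.2f} +/- {std_err:.2f}, r = {r_value:.3f}")

# Pearson correlation of individual sieve-element area with position
elem_area = np.array([31.0, 30.5, 32.0, 29.8, 30.9, 31.5, 30.2])
r_elem, p_elem = stats.pearsonr(x, elem_area)
print(f"single-element area vs position: Pearson r = {r_elem:.2f} (p = {p_elem:.2f})")
```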
because the length scale at which varies is small compared to the total length of the needle , we can approximate well by a smooth function .we note that the precise way in which the number of sieve tubes changes ( be it by simple addition of new tubes or branching of existing ones ) has no impact on the continuum description .phloem loading in conifers is thought to be passive , driven by cell - to - cell diffusion across microscopic ( plasmodesmata ) channels .the sugar loading rate per unit length of needle is proportional to the rate of photosynthesis and to the circumference of the needle , both of which are approximately constant along the needle ( see fig .[ fig : needles ] ) . for a collection of parallel phloem tubes , conservation of sugar mass can be expressed as where is sugar current with the total volume flow rate and the sap sugar concentration .we further assume that at each point water enters the sieve elements by osmosis where is the total conductive phloem area at the position ( see fig .[ fig : abies - cross - section ] for visualization ) .note that , where is the number of sieve tubes at position and is the cross sectional area of a single sieve tube . in eq ., denotes the permeability of the sieve element membrane , is the universal gas constant , is the absolute temperature , and is the radius of one single sieve element .the sugar concentration available for driving an osmotic flow is the difference between the concentration in the sieve element and the constant osmotic concentration of the surrounding cells .likewise , the pressure is the difference between the cytoplasmic pressure and the constant pressure in neighboring cells .for clarity we use the vant hoff value for the osmotic pressure in eq . , which is valid only for dilute ( ideal ) solutions .at the concentrations relevant to phloem sap ( m ) , the error in the osmotic pressure introduced by using the vant hoff value is .equation may be integrated to yield , having imposed a vanishing current at the tip .the total export of sugar from the needle is therefore , proportional to the loading rate and needle length .the factors contributing to the energetic cost of transport include the metabolic energy required to maintain the vasculature and the power dissipated by the flow due to viscous friction .we proceed to consider how the phloem structure influences the magnitude of these contributions , and note that similar energy considerations have been used in the study of other biological transport systems .for instance , zwieniecki and co - workers derived the optimal distribution of tracheids in a pine needle that minimizes the pressure drop required to drive transpiration for a given investment in xylem conduit volume .the phloem consists of severely reduced cells which shed most of their organelles during maturation .it is , however , alive and relies on external supply of metabolic energy .the rate of energy consumption by the phloem tissue itself may be seen as an energetic maintenance cost of the transport conduit . here , we assume that this energetic cost of maintaining the phloem vasculature is proportional to the conductive volume , or equivalently the number of phloem cells ( assuming cells of roughly equal size ) .the viscous power dissipation per unit length is . 
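Two elementary consequences of the model so far can be checked directly: the van't Hoff estimate of the osmotic driving force, and the linear growth of the sugar current away from the tip. The concentration, loading rate and needle length below are assumed order-of-magnitude values, not measurements from this study.

```python
# van't Hoff estimate of the osmotic pressure for an assumed sap concentration
R = 8.314          # J mol^-1 K^-1 (universal gas constant)
T = 298.0          # K
c = 0.5e3          # mol m^-3 (0.5 M, an assumed typical value)
print(f"osmotic pressure RTc ~ {c * R * T / 1e6:.2f} MPa")

# constant loading rate per unit length and zero current at the tip give J(x) = Phi*x,
# so the total export from a needle of length L is Phi*L
Phi = 1.0e-9       # mol m^-1 s^-1 (placeholder loading rate)
L = 0.10           # m (placeholder needle length)
print(f"total sugar export Phi*L ~ {Phi * L:.1e} mol s^-1")
```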
to determine , we note that the local pressure gradient is related to the flow speed by darcy s law where is the pressure , is the viscosity of phloem sap ( typically around 5 times that of water ) and is a geometric constant which solely depends on the cross sectional area of single sieve elements . integrating eq .leads to , and thus the local pressure gradient is given by analysis of the coupled system in eqns . andhave shown that the concentration is approximately constant along the needle ( see also the supplementary information ) . integrating the differential relation for using the above set of approximations ,the total power dissipation is for a given conductive phloem volume , we can now determine the area distribution which minimizes the viscous power dissipation .using the method of lagrange multipliers and the calculus of variations , one finds that the distribution which minimizes under the constraint is where is the average total area of sieve elements . assuming a bundle of sieve elements with constant cross sectional area , this result may be translated immediately to total number of sieve elements by where is the cross sectional area of a single sieve element . from eqns . and, we conclude that a scaling of phloem sieve element number or area with the power minimizes the viscous power dissipation .the observed scaling exponents ( see fig . [fig : master - a - loglog ] and table 1 ) are close to these values , suggesting that the sieve element areas roughly follow the theoretical optimum .we note that the square - root scaling in eq . for the tapering of pine needle xylem conduits .the coupling between water flow and phloem loading required to maintain a constant photosynthetic rate along the needle is thus responsible for driving this remarkable convergence in vascular architecture . in the context of leaf development , we note that pine needles grow from a meristem located at the base of the needle which gradually propels the tip away from the growth zone . newly formed tissue at the base of the needle gradually becomes mature and loses its ability to change its structure as the needle extends .the distribution of phloem conduits along the needle length thus appears to be either predetermined or rely on exchange of information between the tip of the needle and the meristem . while our model is not generally applicable to complex reticulate or anastomosing vein networks , we expect it to be suitable for analysis of leaves with parallel veins , given that the assumption of constant sieve element properties is valid .evidence to support this hypothesis is found in studies by r. f. evert and co workers , who observed similar trends in grasses .for example , phloem area in barley , maize , and sugarcane roughly follow the law , ( fig .[ fig : master - a - loglog ] inset ) , suggesting that the energy dissipation criterion leading to the prediction is broadly applicable .we show in the appendix that the constant volume constraint imposed when obtaining can be relaxed , and that sub - linear scalings is a general feature of the energy minimization principle .previous works , which focused on one - dimensional models of flow in sieve elements , identified the transport velocity ( phloem sap flux density ) as an important physiological parameter .in fact , optimizing the sieve element radius ( in eqn . 
[eqn : net - currents ] ) for maximum flux at the whole plant level results in predictions that are in agreement with experimental observations .interestingly , we find ( see supplementary information ) that while the observed conduit distribution ( i.e. ) minimizes the energetic cost of transport for a fixed tube radius , it does not maximize the average flux density .the size of individual phloem cells at the level of the whole tree thus appears to be optimized for flux density , while the arrangement of tubes in the needle minimizes the energetic cost of transport , working in concert to produce an efficient system of nutrient translocation .in this work , we studied the physical properties of nutrient transport in the phloem of conifer needles .we measured the geometrical properties of needle phloem in several conifer species , varying over one order of magnitude in length , and found that their cross sectional area distribution roughly follows the law .we presented a simple mathematical model which is able to rationalize the observed needle tube geometry by means of minimization of the energy dissipated during flow .expenditure of energy is unavoidable since although the transport is entirely passive by virtue of the osmotic flow process , the plant is forced to maintain an osmotic gradient , consuming energy in the process .we found that experimental data from several species of conifers agree well with the theoretically derived law of area distribution .simple models such as the ones considered in this work may not only elucidate the properties of structures with modest complexity we see in the living world , but also serve as an important stepping stone to further understanding of more complicated systems .the basic underlying constraints and functional requirements that dictate needle design in conifers are not unique to this group of plants .data from parallel - veined grasses indicate similar trends , and the design requirements are expected to hold for plants with reticulate venation patterns .the same mathematical model can potentially be extended to predict vascular distribution when the leaf lamina is broad and the single vein is replaced by an extensive reticulate network .the conifer needle belongs to a general class of network systems that follow a principle of energy dissipation minimization .other important members of this class which is not constrained to one - dimensional or even planar systems include the networks of blood vessels in animals , the xylem vascular system in plants and even river basin networks in geomorphology , thus establishing the importance of optimization considerations . finally , we point out that in recent years the constructal law , stipulating that all living organisms are built so as to optimally facilitate flow ( of fluids , stresses , energy ) has enjoyed some success in explaining the structure and apparent design of biological systems .the findings we report in this paper appear to be in accordance with the basic ideas from constructal theory .needles of mature _ abies nordmanniana , pinus palustris , pinus cembra _ , and _ picea omorika _ were collected in may and june of 2013 .samples of a. nordmanniana , p. cembra and p. omorika were taken in denmark , while _p. 
palustris _ needles were collected in florida ( usa ) and shipped to denmark by courier .needles were embedded in low - melting point agarose ( sigma - aldrich ) and sectioned with a vibrating blade microtome ( leica microsystems ) to ensure uniform section thickness of 100 .sections were imaged using a confocal laser scanning microscope ( sp5x , leica microsystems ) . in this way , 3 to 6 needles of average length ( see figs . [fig : abies - cross - section ] and [ fig : needles ] ) from each species were analyzed .the number of sieve elements and their cross - sectional area were quantified using the image analysis software volocity ( version 5.3 , perkinelmer ) . by way of fluorescence staining with the live - cell marker carboxy fluorescein diacetate ( sigma - aldrich ) non - functional sieve elementswere excluded .needle sections of length were incubated in carboxy fluorescein for 15 minutes after which sections were made and analyzed under the microscope .for all species , only a few strongly deformed sieve elements at the abaxial side of the phloem bundle in which almost no cytoplasm was visible were found to be dead , i.e. non - functional .these cells were not included in the analysis .we consider a extension of the constant volume constraint by introducing a more general dependency on some power of the total area , where we now think of as a general cost of building material and metabolism which scales with cross - sectional area in a nonlinear way .constraints of this type have been used extensively in the field of complex distribution networks .the optimization of eq. under this generalized constraint predicts a scaling of we note that this result is robust : the optimal area scaling is sub - linear , whatever the value of the scaling power of the cost function .the work of kaare h. jensen is supported by the air force office of scientific research ( award no . :fa9550 - 09 - 1 - 0188 ) , the national science fundation ( grant no . :dmr-0820484 ) , the danish council for independent research natural sciences , and the carlsberg foundation .the work of henrik ronellenfitsch is supported by the imprs for physics of biological and complex systems , gttingen .eleni katifori acknowledges the support of the burroughs wellcome fund through the bwf career award at the scientific interface .the phloem geometry data will available digitally at at the time of publication .knoblauch m. , peters w. s. 2010 mnch , morphology , microfluidics - our structural problem with the phloem ._ plant , cell & environment _ 33(9 ) : 143952 .taiz l. , zeiger e. 2010 plant physiology .( sinauer associates , sunderland , ma ) .schumacher w. 1930 untersuchungen ber die lokalisation der stoffwanderung in den leitbndeln hherer pflanzen ._ jahrb . wiss ._ 73 : 770823. mason t. g. , phillis e. 1937 the migration of solutes . _ the botanical review _iii(2 ) : 4771 . crafts a. s. , crisp c. e. 1971 phloem transport in plants .( w. h. freeman & co ltd , san francisco ) .lough t. j. , lucas w. j. 2006 integrative plant biology : role of phloem long - distance macromolecular trafficking ._ annual review of plant biology _ 57 : 20332 .mnch e. 1930 _ die stoffbewegungen in der pflanze_. ( verlag von gustav fischer , jena ) .horwitz l. 1958 some simplified mathematical treatments of translocation in plants ._ plant physiology _33(2 ) : 8193 .thompson m. v. , holbrook n. 
2003 application of a single - solute non - steady - state phloem model to the study of long - distance assimilate transport ._ journal of theoretical biology _220(4 ) : 419455 . pickard w. f. , abraham - shrauner b. 2009 a ` simplest ' steady - state munch - like model of phloem translocation , with source and pathway and sink ._ functional plant biology _36(7 ) : 629644 . jensen k. h. , lee j. , bohr t. , bruus h. , holbrook n. m. , zwieniecki m. a. 2011 optimality of the mnch mechanism for translocation of sugars in plants . _ journal of the royal society , interface _8(61 ) : 115565 . jensen k. h. , liesche j. , bohr t. , schulz a. 2012 universality of phloem transport in seed plants ._ plant , cell & environment _ 35(6 ) : 10651076 .turgeon r. 2006 phloem loading : how leaves gain their independence ._ american institute of biological sciences _56(1 ) : 1524 . turgeon r. 2010 the role of phloem loading reconsidered ._ plant physiology _152(4 ) : 181723 .rennie e. a. , turgeon r. 2009 a comprehensive picture of phloem loading strategies ._ proceedings of the national academy of sciences of the united states of america _106(33 ) : 1416214167 .liesche j. , martens h. j. , schulz a. 2011 symplasmic transport and phloem loading in gymnosperm leaves ._ protoplasma _248(1 ) : 181190 . sevanto s. 2014 phloem transport and drought ._ journal of experimental botany _65(7 ) : 17511759 .choat b. , jansen s. , brodribb t. j. , cochard h. , delzon s. , bhaskar r. , bucci s. j. , feild t. s. , gleason s. m. , hacke u. g. , _ et al ._ 2012 global convergence in the vulnerability of forests to drought .491(7426 ) : 752755 .west g. b. 1999 the fourth dimension of life : fractal geometry and allometric scaling of organisms ._ science _284(5420 ) : 16771679 .savage v. m. , bentley l. p. , enquist b. j. , sperry j. s. , smith d. d. , reich p. b. , framework t. 2010 hydraulic trade - offs and space fi lling enable better predictions of vascular structure and function in plants . _proceedings of the national academy of sciences _107(52 ) : 2272222727 .katifori e. , szllsi g. j. , magnasco m. o. 2010 damage and fluctuations induce loops in optimal transport networks . _ physical review letters _104(4 ) : 048704 .corson f. 2010 fluctuations and redundancy in optimal transport networks ._ physical review letters _ 104(4 ) : 048703 .zwieniecki m. a. , stone h. a. , leigh a. , boyce c. k. , holbrook n. m. 2006 hydraulic design of pine needles : one - dimensional optimization for single - vein leaves ._ plant , cell and environment _ 29(5 ) : 803809 .noblin x. , mahadevan l. , coomaraswamy i. a. , weitz d. a. , holbrook n. m. , zwieniecki m. a. 2008 optimal vein density in artificial and real leaves ._ proceedings of the national academy of sciences of the united states of america _105(27 ) : 91404 .colbert j. t. , evert r. f. 1982 leaf vasculature in sugarcane ( saccharum officinarum l. ) ._ planta _ 156 : 136151 .russell s. , evert r. f. 1985 leaf vasculature in zea mays l. _ planta _ 164 : 448458 .dannenhoffer j. m. , ebert jr .w. , evert r. f. 1990 leaf vasculature in barley , hordeum vulgare ( poaceae ) ._ american journal of botany _77(5 ) : 636652 .knoblauch m. , froelich d. r. , pickard w. f. , peters w. s. 2014 seorious business : structural proteins in sieve tubes and their involvement in sieve element occlusion . _ journal of experimental botany _65(7 ) : 7993 . liesche j. , martens h. , schulz a. 2011 symplasmic transport and phloem loading in gymnosperm leaves ._ protoplasma _248(1 ) : 181190 . 
cath t. , childress a. , elimelech m. 2006 forward osmosis : principles , applications , and recent developments ._ journal of membrane science _ 281(1 - 2 ) : 7087 .katifori e. , magnasco m. o. 2012 quantifying loopy network architectures ._ plos one _ 7(6 ) : e37994 .murray c. d. 1926 the physiological principle of minimum work : i. the vascular system and the cost of blood volume ._ proceedings of the national academy of sciences _12(3 ) : 207214 . murray c. d. 1926 the physiological principle of minimum work : ii .oxygen exchange in capillaries ._ proceedings of the national academy of sciences _12(5 ) : 299304 . jensen k. h. , zwieniecki m. a. 2013 physical limits to leaf size in tall trees ._ 110 : 018104 .mcculloh k. a. , sperry j. s. , adler f. r. 2003 water transport in plants obeys murray s law . _421(6926 ) : 939942 .rinaldo a. , rodriguez - iturbe i. , rigon r. , bras r. l. , ijjasz - vasquez e. , marani a. 1992 minimum energy and fractal structures of drainage networks ._ water resources research _28(9 ) : 21832195 .bernot m. , caselles v. , morel j. m. 2009 _ optimal transportation networks : models and theory ._ ( springer , berlin , heidelberg ) .bejan a. , lorente s. 2011 the constructal law and the evolution of design in nature ._ physics of life reviews _8(3 ) : 209240 .bejan a. , lorente s. , lee j. 2008 unifying constructal theory of tree roots , canopies and forests ._ journal of theoretical biology _254(3 ) : 529540 . ronellenfitsch h. , liesche j. , jensen k. , holbrook n. , schulz a. , katifori e. data from : scaling of phloem structure and optimality of photoassimilate transport in conifer needles ._ dryad digital repository ._ doi:10.5061/dryad.024bn species & scaling exponent & & value pinus palustris & & & & & & & & & & & & average & & & abies nordmanniana & & & & & & & & & & & & & & & & & & & & & average & & & pinus cembra & & & & & & & & & & & & average & & & picea omorika & & & & & & & & & & & & average & & & along the needle ( purple dotted arrows ) .( c ) micrograph cross - sections of the phloem of an _ abies nordmanniana _ needle taken at distances from the tip .the diameter of the circular cross - sections is .the conductive phloem area ( red cells ) increases with distance from the needle tip while the size of individual cells is roughly constant .[ fig : abies - cross - section ] ] , shown in panel ( a ) for all analyzed needles , and cross - sectional area of individual sieve elements , shown in panel ( b ) for one needle of each species , as a function of distance from the needle tip for the conifer species indicated in the legend ( see also fig . [fig : needles ] ) .error bars in ( b ) correspond to one standard deviation .the individual sieve element areas show only little variation as a function of position , as evidenced from _pearson s _ for all needles shown , implying that it is mainly the number of sieve elements which contributes to hydraulic efficiency .data from individual needles are connected by solid lines .insets show details near for the shorter species .[ fig : master - a - linlin],title="fig : " ] + , shown in panel ( a ) for all analyzed needles , and cross - sectional area of individual sieve elements , shown in panel ( b ) for one needle of each species , as a function of distance from the needle tip for the conifer species indicated in the legend ( see also fig . 
[fig : needles ] ) .error bars in ( b ) correspond to one standard deviation .the individual sieve element areas show only little variation as a function of position , as evidenced from _pearson s _ for all needles shown , implying that it is mainly the number of sieve elements which contributes to hydraulic efficiency .data from individual needles are connected by solid lines .insets show details near for the shorter species .[ fig : master - a - linlin],title="fig : " ] law .phloem area plotted as a function of distance from the needle tip on double - logarithmic axes .the dashed line has slope .the colored regions correspond to all power laws whose power dissipation exceeds that of the optimal solution by the given percentages .most of the needles analyzed fall within the range $ ] , see table 1 .the inset shows phloem area data from the monocots barley , maize , and sugarcane obtained from .barley is in good accord with the scaling , while maize and sugarcane also only show approximate sub - linear dependences on distance from leaf tip .[ fig : master - a - loglog ] ]
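The colored regions of the log-log figure compare the dissipation of arbitrary power-law profiles with that of the optimal one. A compact way to reproduce this comparison is sketched below. It assumes that the dissipation density scales as x/A(x) (sugar current growing linearly from the tip, conductance growing with the area) and that the conductive volume is fixed; this is a simplified stand-in for the expressions of the model section, which are not reproduced in this excerpt. Within the family of power-law profiles the minimum then falls at the exponent 1/2, the square-root profile discussed in the text.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

# Compare the viscous cost of power-law area profiles A(x) = c * x**alpha that all
# use the same conductive volume.  The dissipation density x / A(x) is an assumed
# stand-in for the expression of the model section.
L, V = 1.0, 1.0                                  # normalized needle length and volume

def dissipation(alpha, n=20000):
    x = np.linspace(1e-6, L, n)
    c = V * (alpha + 1.0) / L**(alpha + 1.0)     # fixes the volume: int_0^L A dx = V
    return trapezoid(x / (c * x**alpha), x)

res = minimize_scalar(dissipation, bounds=(0.05, 1.5), method="bounded")
print(f"optimal exponent within the power-law family: {res.x:.3f}")   # close to 0.5
for alpha in (0.25, 0.5, 0.75, 1.0):
    excess = dissipation(alpha) / dissipation(res.x) - 1.0
    print(f"alpha = {alpha:.2f}: dissipation {100 * excess:.1f}% above the optimum")
```

Under this assumption, profiles with exponents moderately far from 1/2 pay only a modest penalty, which is in keeping with the spread of fitted exponents reported in table 1 and with the width of the colored bands in the figure.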
the phloem vascular system facilitates transport of energy - rich sugar and signaling molecules in plants , thus permitting long range communication within the organism and growth of non - photosynthesizing organs such as roots and fruits . the flow is driven by osmotic pressure , generated by differences in sugar concentration between distal parts of the plant . the phloem is an intricate distribution system , and many questions about its regulation and structural diversity remain unanswered . here , we investigate the phloem structure in the simplest possible geometry : a linear leaf , found , for example , in the needles of conifer trees . we measure the phloem structure in four tree species representing a diverse set of habitats and needle sizes , from 1 cm ( _ picea omorika _ ) to 35 cm ( _ pinus palustris _ ) . we show that the phloem shares common traits across these four species and find that the size of its conductive elements obeys a power law . we present a minimal model that accounts for these common traits and takes into account the transport strategy and natural constraints . this minimal model predicts a power law phloem distribution consistent with transport energy minimization , suggesting that energetics are more important than translocation speed at the leaf level . * key words : * phloem structure , photoassimilate transport , optimization , mathematical modelling +
the new interface cement equilibrated mortar ( nicem ) method proposed in is an equilibrated mortar domain decomposition method that allows for the use of optimized schwarz algorithms with robin interface conditions on non - conforming grids .it has been analyzed in in 2d and 3d for elements .the purpose of this paper is to extend this numerical analysis in 2d for piecewise polynomials of higher order .we thus establish new numerical analysis results in the frame of finite element approximation and also present the iterative algorithm and prove its convergence in all these cases .we first consider the problem at the continuous level : find such that where and are partial differential equations .the original schwarz algorithm is based on a decomposition of the domain into overlapping subdomains and the resolution of dirichlet boundary value problems in each subdomain .it has been proposed in to use more general interface / boundary conditions for the problems on the subdomains in order to use a non - overlapping decomposition of the domain .the convergence factor is also dramatically reduced .more precisely , let be a ( or convex polygon in 2d or polyhedron in 3d ) domain of , or ; we assume it is decomposed into non - overlapping subdomains : we suppose that the subdomains are either or polygons in 2d or polyhedrons in 3d .we assume also that this decomposition is geometrically conforming in the sense that the intersection of the closure of two different subdomains , if not empty , is either a common vertex , a common edge , or a common face of the subdomains in 3d .let be the outward normal from .let be the chosen transmission conditions on the interface between subdomains and ( e.g. ) .what we shall call here a schwarz type method for the problem ( [ eq : pbgen])-([eq : pbgen2 ] ) is its reformulation : find such that leading to the iterative procedure the convergence factor of associated schwarz - type domain decomposition methods depends largely on the choice of the transmission operators ( see for instance and ) . more precisely , transmission conditions which reduce dramatically the convergence factor of the algorithm have been proposed ( see ) for a convection - diffusion equation , where coefficients in second order transmission conditions where optimized . on the other hand , the mortar element method , first introduced in ,enables the use of non - conforming grids , and thus parallel generation of meshes , local adaptive meshes and fast and independent solvers .it is also well suited to the use of `` dirichlet - neumann '' ( ) , or `` neumann - neumann '' preconditioned conjugate gradient method applied to the schur complement matrix . in ,a new cement to match robin interface conditions with non - conforming grids in the case of a finite volume discretization was introduced and analyzed .such an approach has been extended to a finite element discretization in .a variant has been independently implemented in for the maxwell equations , without numerical analysis .another approach , in the finite volume case , has been proposed in .the numerical analysis of the nicem method proposed in is done in for finite elements , in 2d and 3d .these results are for interface conditions of order 0 ( i.e. 
) and are the prerequisites for the goal in designing this non - overlapping method for interface conditions such as ventcel interface conditions which greatly enhance the information exchange between subdomains , see for preliminary results on the extension of the nicem method to ventcel conditions .the purpose of this paper is first to present a general finite element nicem method in the case of finite elements , with in 2d and in 3d .we also provide a robin iterative algorithm and prove its convergence .then , we present in full details the error analysis in the case of piecewise polynomials of high order in 2d . in section [ sec.defmethod ] , we describe the nicem method in 2d and 3d. then , in section [ sec.algo ] , we present the iterative algorithm at the continuous and discrete levels , and we prove , in both cases , the well - posedness and convergence of the iterative method , for polynomials of low and high order in 2d , and for finite elements in 3d . the convergence is also proven in 3d for finite elements , , in a weak sense . in section [ sec.bestfit2d ]we extend the error estimates analysis given in to 2d piecewise polynomials of higher order .we finally present in section [ sec : numresults ] simulations for two and four subdomains , that fit the theoretical estimates .we consider the following problem : find such that [ initial_bvp1 ] ( i d - ) u & = & f + [ initial_bvp2 ] u & = & 0 , where is given in .+ the variational statement of the problem ( [ initial_bvp1])-([initial_bvp2 ] ) consists in writing the problem as follows : find such that [ initial_vf ] _( u v + uv ) dx = _ fvdx , v h^1_0 ( ) .we introduce the space defined by and we introduce the interface of two adjacent subdomains , it is standard to note that the space can then be identified with the subspace of the -tuple that are continuous on the interfaces : v = \{v=(v_1, ... ,v_k ) _ k=1^k h^1_*(^k ) , k , , k , 1 k , k , v_k = v _ ^k , } . following , in order to glue non - conforming grids with robin transmission conditions , we impose the constraint over through a lagrange multiplier in . the constrained space is then defined as follows [ eq : constrainedspace ] v = ( v , q)(_k=1^k h^1_*(^k))(_k=1^k h^-1/2(^k ) ) , + v_k = v_q_k = - q_^k , , k ,. then , problem ( [ initial_vf ] ) is equivalent to the following one ( see ) : find such that being equivalent with the original problem , where over , this problem is well posed .this can also be directly derived from the proof of an inf - sup condition that follows from the arguments developed hereafter for the analysis of the iterative procedure .note that the dirichlet - neumann condition in is equivalent to the following combined equality as noticed in , for regular enough function it is also equivalent to which is the form under which the discrete method is described .let us describe the method in the non - conforming discrete case .we introduce now the discrete spaces for piecewise polynomials of higher order in 2d .each is provided with its own mesh , such that ^k=_t _ h^k t. 
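Before the higher-order, multi-domain construction is introduced, it may help to recall what the discrete counterpart of ([initial_vf]) looks like in the simplest possible setting. The sketch below assembles lowest-order (P1) elements in one space dimension for the model operator (Id − Δ); it is a single-domain illustration only and does not include the mortar coupling analyzed in the paper.

```python
import numpy as np

# 1D P1 finite element discretization of the model problem (Id - Laplacian)u = f,
# u = 0 on the boundary: find u_h with  int(u_h' v' + u_h v) = int(f v)  for all v_h.
n_el = 50
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = np.diff(nodes)
A = np.zeros((n_el + 1, n_el + 1))
b = np.zeros(n_el + 1)
f = lambda s: np.ones_like(s)                          # placeholder source term

for e in range(n_el):
    he = h[e]
    k_loc = np.array([[1.0, -1.0], [-1.0, 1.0]]) / he  # stiffness: int phi_i' phi_j'
    m_loc = he * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0   # mass: int phi_i phi_j
    idx = [e, e + 1]
    A[np.ix_(idx, idx)] += k_loc + m_loc
    b[idx] += he * f(nodes[idx]) / 2.0                 # trapezoidal load vector

# homogeneous Dirichlet conditions: eliminate the first and last rows/columns
u = np.zeros(n_el + 1)
u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
print(f"midpoint value u_h(0.5) = {u[n_el // 2]:.4f}")
```

For f ≡ 1 the exact solution is u(x) = 1 − cosh(x − 1/2)/cosh(1/2), so u(1/2) ≈ 0.113, which the discrete value approaches as the mesh is refined.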
for ,let be the diameter of ( ) and the discretization parameter with as noticed in , for the sake of readability we prefer to use instead of , but all the analysis could be performed with instead of .let be the diameter of the circle ( in 2d ) or sphere ( in 3d ) inscribed in , then is a measure of the non - degeneracy of .we suppose that is uniformly regular : there exists and independent of such that we consider that the sets belonging to the meshes are of simplicial type ( triangles ) , but the analysis made hereafter can be applied as well for quadrangular meshes .let denote the space of all polynomials defined over of total degree less than or equal to .the finite elements are of lagrangian type , of class .we define over each subdomain two conforming spaces and by : y_h^k&=&\{v_h , k ^0(^k ) , v_h , k_|t _p(t ) , t _h^k } , + x_h^k&=&\{v_h , k y_h^k , v_h , k_|^k = 0}.in what follows we assume that the mesh is designed by taking into account the geometry of the in the sense that , the space of traces over each of elements of is a finite element space denoted by .let be given , the space is then the product space of the over each such that . with each such interfacewe associate a subspace of in the same spirit as in the mortar element method in 2d or and in 3d . to be more specific , in 2dif the space consists of continuous piecewise polynomials of degree , then it is readily noticed that the restriction of to consists in finite element functions adapted to the ( possibly curved ) side of piecewise polynomials of degree .this side has two end points that we denote as and that belong to the set of vertices of the corresponding triangulation of : .the space is then the subspace of those elements of that are polynomials of degree over both ] .as before , the space is the product space of the over each such that .let be a given positive real number .following , the discrete constrained space is defined as and the discrete problem is the following one : find such that + + [ pbdiscret ] _k=1^k _ ^k ( u_h , kv_h , k + u_h , k v_h , k ) dx - _ k=1^k _ ^k p_h , k v_h , k ds = _ k=1^k _ ^k f_k v_h , k dx .the robin condition is the discrete counterpart of .let us describe the algorithm in the continuous case , and then in the non conforming discrete case . in both cases , we prove the convergence of the algorithm towards the solution of the problem .let us consider the robin interface conditions .we introduce the following notations : and .the algorithm is then defined as follows : let be an approximation of in at step . then , is the solution in of [ algo_continu ] _ ^k ( u_k^n+1v_k + u_k^n+1v_k ) dx - p_k^n+1,v_k_^k = _ ^k f_kv_kdx , v_k h^1_*(^k ) , + [ ci_continu ] < p_k^n+1 + u_k^n+1,v_k>_^k,= < - p_^n + u_^n , v_k>_^k , , v_k h_00 ^ 1/2(^k , ) .it is obvious to remark that this series of equations results in uncoupled problems set on every . recalling that , the strong formulation is indeed that -u_k^n+1 + u_k^n+1 & = & f_k ^k + + u_k^n+1 & = & -p_^n+u_^n ^k , + [ flux_fort ] p_k^n+1 & = & u_k^n+1*n*_k ^k . 
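The strong form above makes explicit that one step of the algorithm amounts to independent Robin solves on the subdomains followed by an exchange of Robin data. The following one-dimensional finite-difference sketch illustrates this for two subdomains and the model operator (Id − Δ); the Robin parameter α, the grid and the right-hand side are illustrative choices, the interface fluxes are approximated by one-sided differences, and no mortar coupling is involved, so this is not the non-conforming discretization analyzed below.

```python
import numpy as np

# Lions-type non-overlapping Schwarz iteration with Robin transmission conditions
# for -u'' + u = f on (0,1), u(0) = u(1) = 0, split at x = 1/2 (illustrative only).
N = 200
h = 1.0 / N
M = N // 2                        # interface node index
x = np.linspace(0.0, 1.0, N + 1)
f = np.ones_like(x)               # placeholder source term
alpha = 5.0                       # Robin parameter (any alpha > 0 gives convergence)

def solve_left(g):
    """Nodes 1..M: Dirichlet at x=0, Robin  u' + alpha*u = g  at the interface."""
    n = M
    A = np.zeros((n, n))
    b = np.zeros(n)
    for j in range(n - 1):                        # rows for interior nodes 1..M-1
        A[j, j] = 2.0 / h**2 + 1.0
        if j > 0:
            A[j, j - 1] = -1.0 / h**2
        A[j, j + 1] = -1.0 / h**2
        b[j] = f[j + 1]
    A[n - 1, n - 1] = 1.0 / h + alpha             # (u_M - u_{M-1})/h + alpha*u_M = g
    A[n - 1, n - 2] = -1.0 / h
    b[n - 1] = g
    u = np.zeros(M + 1)
    u[1:] = np.linalg.solve(A, b)
    return u                                      # values at nodes 0..M

def solve_right(g):
    """Nodes M..N-1: Robin  -u' + alpha*u = g  at the interface, Dirichlet at x=1."""
    n = N - M
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0 / h + alpha                     # -(u_{M+1} - u_M)/h + alpha*u_M = g
    A[0, 1] = -1.0 / h
    b[0] = g
    for j in range(1, n):                         # rows for interior nodes M+1..N-1
        A[j, j] = 2.0 / h**2 + 1.0
        A[j, j - 1] = -1.0 / h**2
        if j < n - 1:
            A[j, j + 1] = -1.0 / h**2
        b[j] = f[M + j]
    u = np.zeros(N - M + 1)
    u[:-1] = np.linalg.solve(A, b)
    return u                                      # values at nodes M..N

g1 = g2 = 0.0                                     # initial Robin data
for it in range(40):
    u1 = solve_left(g1)
    u2 = solve_right(g2)
    jump = abs(u1[-1] - u2[0])                    # trace discontinuity at x = 1/2
    # exchange of Robin data:  g1 <- u2' + alpha*u2 ,  g2 <- -u1' + alpha*u1
    g1 = (u2[1] - u2[0]) / h + alpha * u2[0]
    g2 = -(u1[-1] - u1[-2]) / h + alpha * u1[-1]
print(f"interface trace jump after 40 iterations: {jump:.2e}")
```

At convergence the two traces and the one-sided fluxes agree across the interface, which is the discrete analogue of the fixed point of ([algo_continu])-([ci_continu]); the observed geometric decay of the jump illustrates the convergence result proved next.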
from this strong formulationit is straightforward to derive by induction that if each , is chosen in , then , for each , , and the solution belongs to and belongs to by standard trace results ( ) .this regularity assumption on will be done hereafter .we can prove now that the algorithm ( [ algo_continu])-([ci_continu ] ) converges for all : assume that is in and .then , the algorithm ( [ algo_continu])-([ci_continu ] ) converges in the sense that _n ( u_k^n - u_k_h^1(^k ) + p_k^n - p_k_h^-1/2(^k ) ) = 0 , 1kk , where is the restriction to of the solution to ( [ initial_bvp1])-([initial_bvp2 ] ) , and over , .+ + * proof*. as the equations are linear , we can take .we prove the convergence in the sense that the associated sequence satisfies _n ( u_k^n_h^1(^k ) + p_k^n_h^-1/2(^k ) ) = 0 , 1kk .we proceed as in by using an energy estimate that we derive by taking in and the use of the regularity property that _ ^k ( |u_k^n+1|^2 + |u_k^n+1|^2 ) dx = _ ^k p_k^n+1u_k^n+1 ds that can also be written _ ^k ( p_k^n+1-u_k^n+1)^2)ds . by using the interface conditions ( [ ci_continu ] ) we obtain [ estim_en ] _^k ( |u_k^n+1|^2 + |u_k^n+1|^2 ) dx + _ _ ^k , ( p_k^n+1-u_k^n+1)^2ds + = _ _ ^k , ( - p_^n+u_^n)^2ds .let us now introduce two quantities defined at each step by : e^n=_k=1^k _ ^k ( |u_k^n|^2 + |u_k^n|^2 ) b^n = _ k=1^k_k _ ^k , ( p_k^n - u_k^n)^2ds . by summing up the estimates ( [ estim_en ] ) over , we have , so that , by summing up these inequalities , now over , we obtain : _n=1^ e^n b^0 .we thus have .relation ( [ flux_fort ] ) then implies : _n p_k^n_h^-1/2(^k)=0 , k=1, ... ,k , which ends the proof of the convergence of the continuous algorithm. we first introduce the discrete algorithm defined by : let be a discrete approximation of in at step .then , is the solution in of [ algo_discret ] _^k ( u_h , k^n+1v_h , k + u_h , k^n+1v_h , k ) dx - _ ^kp_h , k^n+1 v_h , k ds = _ ^k f_kv_h , kdx , v_h , kx_h^k , + [ ci_discret ] _^k , ( p_h , k^n+1 + u_h , k^n+1)_h , k , = _^k , ( -p_h,^n + u_h,^n ) _ h , k , , _ h , k , w_h^k , . in order to analyze the convergence of this iterative scheme, we have to precise the norms that can be used on the lagrange multipliers . for any , in addition to the natural norm , we can define two better suited norms as follows p_-12 = ( _ k=1^k p_k_h^-12(^k)^2 ) ^1 2 p_- 1 2 , * = ( _ k=1^k _ = 1 ^k p_k_h^-12_*(^k,)^2 ) ^1 2 , where stands for the dual norm of .we also need a stability result for the lagrange multipliers , and refer to in 2d and to in 3d , in which it is shown that , [ lem.faker ] there exists a constant such that , for any in , there exists an element in that vanishes over and satisfies [ stab1 ] _^k , p_h , k , w^h , k , p_h , k,^2_h^-12_*(^k , ) with a bounded norm [ stab2 ]w^h , k,_h^1(^k ) c _ * p_h , k,_h^-12_*(^k , ) .let denote the orthogonal projection operator from onto .then , for , is the unique element of such that we are now in a position to prove the convergence of the iterative scheme [ theo2 ] let us assume that , for some small enough constant .then , the discrete problem ( [ pbdiscret ] ) has a unique solution .the algorithm ( [ algo_discret])-([ci_discret ] ) is well posed and converges in the sense that _n ( u_h , k^n - u_h , k_h^1(^k ) + _ k p_h , k,^n - p_h , k,_h^-12_*(^k , ) ) = 0 , 1kk . * proof*. for the sake of convenience , we drop out the index in what follows .we first assume that problems ( [ pbdiscret ] ) and ( [ algo_discret])-([ci_discret ] ) are well posed and proceed as in the continuous case and assume that . 
from ( [ eq : defpi ] )we have and ( [ ci_discret ] ) also reads [ eq : constraintprojn ] p_k^n+1+_k , ( u_k^n+1)= _ k , ( -p_^n+u_^n ) ^k , . by taking in ( [ algo_discret ] ), we thus have _ ^k ( = _ _ ^k , ( ( p_k^n+1+_k , ( u_k^n+1))^2 - ( p_k^n+1-_k , ( u_k^n+1))^2)ds . then , by using the interface conditions ( [ eq : constraintprojn ] ) we obtain _ ^k( |u_k^n+1|^2 + |u_k^n+1|^2 ) dx + _ _ ^k , ( p_k^n+1-_k,(u_k^n+1))^2ds + = _ _ ^k , ( _ k,(p_^n - u_^n))^2 ds .it is straightforward to note that _^k , ( _ k,(p_^n - u_^n))^2ds _ ^k , ( p_^n - u_^n)^2 ds + = _ ^k,(p_^n -_,k(u_^n ) + _ , k(u_^n ) - u_^n)^2ds + = _ ^k,(p_^n -_,k(u_^n))^2 + ^2(_,k(u_^n ) - u_^n)^2ds since is orthogonal to any element in . for the last term above, we recall that ( see in 2d and or equation ( 5.1 ) in 3d ) with similar notations as those introduced in the continuous case , we deduce e^n+1 + b^n+1 c h e^n + b^n and we conclude as in the continuous case : if then . the convergence of towards 0 in the norm follows .taking in ( [ algo_discret ] ) , then using ( [ stab1 ] ) and the convergence of towards 0 in the norm , we derive the convergence of in the norm .note that by having and prove that from which we derive that the square problem ( [ algo_discret])-([ci_discret ] ) is uniquely solvable hence well posed .similarly , having and getting rid of the superscripts and in the previous proof gives ( with obvious notations ) : e + b c h e + b.the existence and uniqueness of a solution of ( [ pbdiscret ] ) then results with similar arguments . in the well - posedness of ( [ pbdiscret ] )is addressed through a more direct proof : let us introduce over the bilinear form the space is endowed with the norm v _ * = ( _ k=1^k v_k_h^1(^k)^2 ) ^1 2 .[ lem.infsup ] there exists and a constant such that moreover , we have the continuity argument : there exists a constant such that [ ineq : continuity ] ( u_h , p_h ) _ h , v_h_k=1^k x_h^k , a((u_h , p_h ) , v_h ) ) c ( u_h _ * + p_h_-12 ) ( v_h _ * ) .this lemma is proven in , based on lemma [ lem.faker ] . from lemma [ lem.infsup ] , we have for any , [ estimuuh ] u- u_h _ * + p- p_h_-1 2 , * c ( u- u_h _ * + p- p_h_-1 2 ) .and we are led to the analysis of the best fit of by elements in .as noticed in , it is well known but unusual that the inf - sup and continuity conditions involve different norms : the and norms .thus , these two different norms appear in and the best approximation analysis will be done using the norm , while the error estimates will involve the norm . the analysis of the best fit as been done in in 2d and 3d for approximations .let us analyze the best approximation of by elements in in the general case of higher order approximations in 2d .in this part we analyze the best approximation of by elements in . 
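The projection π is also the natural tool with which to experiment on the best-approximation question addressed next. The sketch below computes an element-wise L²-orthogonal projection onto discontinuous piecewise polynomials on a one-dimensional interface mesh; the actual multiplier space W_h of the paper is in addition continuous and of reduced degree on the two end segments, constraints that are omitted here for brevity.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def project(q, nodes, p, nq=20):
    """Element-wise L2 projection of q onto piecewise polynomials of degree p
    (Legendre coefficients per element); a simplified stand-in for pi."""
    xq, wq = leg.leggauss(nq)                     # Gauss points/weights on [-1, 1]
    coeffs = []
    for a, b in zip(nodes[:-1], nodes[1:]):
        xm, xr = 0.5 * (a + b), 0.5 * (b - a)
        qv = q(xm + xr * xq)
        # orthogonality of the Legendre polynomials makes the projection diagonal
        c = np.array([(2 * k + 1) / 2.0 * np.sum(wq * qv * leg.legval(xq, np.eye(p + 1)[k]))
                      for k in range(p + 1)])
        coeffs.append(c)
    return coeffs

def evaluate(coeffs, nodes, x):
    y = np.zeros_like(x, dtype=float)
    for (a, b), c in zip(zip(nodes[:-1], nodes[1:]), coeffs):
        m = (x >= a) & (x <= b)
        y[m] = leg.legval(2.0 * (x[m] - a) / (b - a) - 1.0, c)
    return y

nodes = np.linspace(0.0, 1.0, 6)                  # interface mesh with 5 segments
q = lambda s: np.sin(2.0 * np.pi * s)
xs = np.linspace(0.0, 1.0, 400)
for p in (1, 2, 3):
    err = np.max(np.abs(evaluate(project(q, nodes, p), nodes, xs) - q(xs)))
    print(f"degree {p}: max projection error {err:.2e}")
```

Refining the interface mesh or raising the polynomial degree makes this error decrease at rates of the kind that enter the best-fit estimates established in the next section.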
following the same lines as in the analysis of the best fit in the situation of , we can prove the following results : [ best - fit ] let , be such that with , and .let us set also over each .then there exists in and , with such that satisfy the coupling condition ( [ disc.const ] ) , and _h -u _ * & & c h^1+m _ k=1^k u_k _ h^2+m(^k ) + c h^m _ = 1^k_kh^1 2+m(^k , ) , + + _ k h - p_k , _h^-1 2(^k , ) & & ch^2+m ( u_k_h^2+m(^k ) + u__h^2+m(^ ) ) + & & + c h^1+m p_k , _h^1 2+m(^k , ) .where is a constant independent of and .if we assume more regularity on the normal derivatives on the interfaces , we have [ best - fit.2 ] under the assumptions of theorem [ best - fit ] and assuming in addition that is in .then there exists in and such that satisfy ( [ disc.const ] ) , and where is a constant independent of and , and if and if .the main part of the proof is independent of the degree of the approximation and is done in .only lemma 4 in is dependent of the degree of the approximation and is only proven for a approximation .we prove it for higher order approximations : [ lem_1 ] assume the degree of the finite element approximation .there exists two constants and independent of such that for all in , there exists an element in , such that [ injectif ] _^k,(_,k+_k,(_,k))_,k c_1_,k _ l^2(^k,)^2 , + [ stable ] _ , k _l^2(^k , ) c_2 _ , k _ l^2(^k , ) .the limit is related to the arguments used in the proof we propose for this lemma , thus , a priori , only technical .we have not found how to alleviate this limit but actually , for applications , this limit is quite above what is generally admitted as the optimal range for the degree of the polynomial in finite element methods . indeed , as regards the question of accuracy with respect to run time , the publication analyses in full details and on a variety of problems and regularity of solutions , the accuracy achieved by low to high order finite element approximations as a function of the number of degrees of freedom and of the run time .it appears that the use of degrees between 5 and 8 is quite competitive which motivates the present analysis .the proof of these results is performed in the following steps .note that lemma [ lem : etapsi_base ] below , that generalizes one of the main arguments in the proof of lemma 4 in to higher degree in 2d would involve , for a similar generalization in 3d ( see lemma 7 of ) , an extension to higher order of the theory developed in that does not exist yet and goes beyond the scope of the present paper .+ [ lem : etapsi_base ] let be an integer .there exists and such that for all ) ] s.t . , and * proof*. this lemma has been proven in the case in . for ,we prove it by studying for a given ) ] such that )\\ \varphi(1)=\eta(1 ) \end{array}}j(\varphi;\eta).\ ] ] the function is strictly concave in and there exists a function satisfying the constraint .this problem admits a solution .the functional being quadratic in and the constraint being affine , the optimality condition shows that the problem reduces to a linear problem the right hand side of which depends linearly of .the affine constraint being of rank one , the problem ( [ eq : minconst ] ) admits a unique solution which depends linearly of .therefore , it makes sense to introduce the operator : ) & \longrightarrow & { \mathbf{p}}_{p-1}([-1,1])\\ \hphantom{s : } \eta & \mapsto & \psi \mathrm{\ solution\ to\ ( \ref{eq : minconst } ) } , \end{array}\ ] ] where ) ] that vanish at . 
in lemma[ lem : etapsi_base ] , we take .the operator is linear from a finite dimensional space to another so that it is continuous for any norm on these spaces. therefore there exists possibly depending on such that .moreover , the function )\backslash \{0\ } & \longrightarrow & { { \mathbb r}}\\ \hphantom{s : } \eta & \mapsto & \ds\frac{j(s(\eta),\eta)}{\ds\int_{-1}^1 \eta^2 } \end{array}\ ] ] is continuous and such that for any . therefore , it reaches its minimum which is strictly positive as results from the lemma stated and proven in the next subsection and the proof of lemma [ lem : etapsi_base ] is complete. + [ lem : etapsi ] let and ) ] , we get hence , the dual problem writes where and satisfies ( [ eq : psi ] ) .after some calculations , appears a second order polynomial in : - 1,1[)}^2-\frac{9}{2}\frac{\eta_p^2}{2p+1});\ ] ] its leading coefficient is positive and its discriminent is proven to be negative in the next lemma , from which we derive that is positive and the proof is complete .[ lem : delta ] for , the discriminant of ( [ eq : g ] ) : - 1,1[)}^2 + 9\frac{\eta_p^2}{2p+1})\ ] ] is negative if ) ] , and ) ] .the function is quadratic so that it suffices to study the extrema of - 1,1[)} ] and .+ let us consider the vector space .any function in satisfies and . the optimality relation w.r.t . to we have either or solution to ( [ eq : minmaxdiscr ] ) belongs to the space .the first case corresponds to a negative value for which is in agreement with the lemma to be proved .let us study the latter case .we shall make use of ( see ) [ lem : lagrangeprime ] the family of legendre polynomials satisfies for any , , * proof . * we only need to prove the last equality , that results from the above indeed it can be checked easily that moreover , we have and thus lemma [ lem : lagrangeprime ] .+ therefore , there exists s.t . .since is defined up to a constant , we only have to consider the two cases or .+ * case 1 : * + from , we get so that since is supposed larger than 1 , the leading coefficient of is negative .if the discriminant of is negative , the polynomial is negative for any .this discriminant has the value and is negative for . +* case 2 : * + from , we get so that since is an eigenvalue , it is not zero and the above relation shows that we can take . then , we have so that which ends the proof of lemma [ lem : delta ] . and the estimate ( [ eq : logh ] ) is standard . for ,the proof is the same as for lemma 5 in : let be the unique element of defined as follows : then , using deny - lions theorem we have ^{\ell , k},x_1^{\ell , k } [ ) } \\ & + ch^{1 + 2p}\|p_{k,\ell}\|^2_{h^{{1\over 2}+p}([x_1^{\ell , k},x_{n-1}^{\ell , k } ] ) } + \| { \bar p}_{k \ell h } - p_{k,\ell } \|^2_{l^2(]x_{n-1}^{\ell , k},x_n^{\ell , k}[)}. \nonumber\end{aligned}\ ] ] in order to analyze the two extreme contributions , we use deny - lions theorem _ k h - p_k , ^2_l^2(]x_0^,k , x_1^,k [ ) ch^1 + 2p-2q^2_l^q(]x_0^,k , x_1^,k [ ) , and taking , we finish the proof as for lemma 5 in . the authors would like to thank franois cuvelier for his help in the implementation of the test case of section [ subsubsec : err12dom ] , especially for his development of a freefem++ code that generates automatically the meshes with different refinement levels , that we used for our numerical results ., _ domain decomposition method and the helmholtz problem .ii _ , kleinman ralph ( eds ) et al . , mathematical and numerical aspects of wave propagation. 
proceedings of the 2nd international conference held in newark , de , usa , june 7 - 10 , 1993 .philadelphia , pa : siam , ( 1993 ) , pp .197 - 206 ., it a new cement to glue non - conforming grids with robin interface conditions : the finite element case , domain decomposition methods in science and engineering series : lecture notes in computational science and engineering , vol .40 , kornhuber , r. ; hoppe , r. ; periaux , j. ; pironneau , o. ; widlund , o. ; xu , j. ( eds . ) , ( 2004 ) ., _ optimized krylov - ventcell method .application to convection - diffusion problems _ , proceedings of the 9 international conference on domain decomposition methods , 3 - 8 june 1996 , bergen ( norway ) , domain decomposition methods in sciences and engineering , edited by p. bjorstad , m. espedal and d. keyes ( 1998 ) , p. 382 - 389 . , _ on the schwarz alternating method iii : a variant for nonoverlapping subdomains _ , third international symposium on domain decomposition methods for partial differential equations , siam ( 1989 ) , pp .202 - 223 . ,_ additive schwarz methods with nonreflecting boundary conditions for the parallel computation of helmholtz problems _ , in xiao - chuan cai , charbel farhat and jan mandel , editors , tenth international symposium on domain decomposition methods for partial differential equations , ams , ( 1997 ) .
in we proposed a new non - conforming domain decomposition paradigm , the new interface cement equilibrated mortar ( nicem ) method , based on schwarz - type methods , that allows for the use of robin interface conditions on non - conforming grids . the error analysis was done for finite elements , in 2d and 3d . in this paper , we provide new numerical analysis results that allow us to extend this error analysis in 2d to piecewise polynomials of higher order and also to prove the convergence of the iterative algorithm in all these cases . keywords : optimized schwarz domain decomposition , robin transmission conditions , finite element methods , non - conforming grids , error analysis , piecewise polynomials of high order , nicem method .
optoacoustic tomography ( oat ) , also known as photoacoustic computed tomography , is an emerging imaging modality that has great potential for a wide range of biomedical imaging applications . in oat , a short laser pulseis employed to irradiate biological tissues .when the biological tissues absorb the optical energy , acoustic wave fields can be generated via the thermoacoustic effect .the acoustic wave fields propagate outward in three - dimensional ( 3d ) space and are measured by use of ultrasonic transducers that are distributed outside the object .the goal of oat is to obtain an estimate of the absorbed energy density map within the object from the measured acoustic signals . to accomplish this ,an image reconstruction algorithm is required .a variety of analytic image reconstruction algorithms have been proposed .these algorithms generally assume an idealized transducer model and an acoustically homogeneous medium .also , since they are based on discretization of continuous reconstruction formulae , these algorithms require the acoustic pressure to be densely sampled over a surface that encloses the object to obtain an accurate reconstruction . to overcome these limitations , iterative image reconstruction algorithmshave been proposed .although the optoacoustic wave intrinsically propagates in 3d space , when applying to experimental data , most studies have employed two - dimensional ( 2d ) imaging models by making certain assumptions on the transducer responses and/or the object structures .an important reason is because the computation required for 3d oat image reconstruction is excessively burdensome .therefore , acceleration of 3d image reconstruction will facilitate algorithm development and many applications including real - time 3d pact .a graphics processing unit ( gpu ) card is a specialized device specifically designed for parallel computations .compute unified device architecture ( cuda ) is an extension of the c / fortran language that provides a convenient programming platform to exploit the parallel computational power of gpus .the cuda - based parallel programming technique has been successfully applied to accelerate image reconstruction in mature imaging modalities such as x - ray computed tomography ( ct ) and magnetic resonance imaging ( mri ) . in oat , however , only a few works on utilization of gpus to accelerate image reconstruction have been reported .for example , the k - wave toolbox employs the nvidia cuda fast fourier transform library ( cufft ) to accelerate the computation of 3d fft .also a gpu - based sparse matrix - vector multiplication strategy has been applied to 3d oat image reconstruction for the case that the system matrix is sparse and can be stored in memory .however , there remains an important need to develop efficient implementations of oat reconstruction algorithms for general applications in which the system matrix is too large to be stored .in this work , we propose parallelization strategies , for use with gpus , to accelerate 3d image reconstruction in oat .both filtered backprojection ( fbp ) and iterative image reconstruction algorithms are investigated . 
for use with iterative image reconstruction algorithms ,we focus on the parallelization of projection and backprojection operators .specifically , we develop two pairs of projection / backprojection operators that correspond to two distinct discrete - to - discrete ( d - d ) imaging models employed in oat , namely the interpolation - based and the spherical - voxel - based d - d imaging models .note that our implementations of the backprojection operators compute the exact adjoint of the forward operators , and therefore the projector pairs are ` matched ' .the remainder of the article is organized as follows . in section [sect : background ] , we briefly review oat imaging models in their continuous and discrete forms .we propose gpu - based parallelization strategies in section [ sect : gpumethods ] .numerical studies and results are described in section [ sect : nummethods ] and section [ sect : results ] respectively . finally , a brief discussion and summary of the proposed algorithms are provided in section [ sect : summary ] .a continuous - to - continuous ( c - c ) oat imaging model neglects sampling effects and provides a mapping from the absorbed energy density function to the induced acoustic pressure function . here, is the temporal coordinate , and denote the locations within the object support and on the measurement surface , respectively .a canonical oat c - c imaging model can be expressed as : where is the dirac delta function , , , and denote the thermal coefficient of volume expansion , ( constant ) speed - of - sound , and the specific heat capacity of the medium at constant pressure , respectively .we introduce an operator notation to denote this c - c mapping .alternatively , eqn . can be reformulated as the well - known spherical radon transform ( srt ) : where the function is related to as the srt model provides an intuitive interpretation of each value of as a surface integral of over a sphere centered at with radius . based on c - c imaging models , a variety of analytic image reconstruction algorithms have been developed .for the case of a spherical measurement geometry , an fbp algorithm in its continuous form is given by : {t=\frac{|\mathbf r-\mathbf r^s|}{c_0}},\ ] ] where denotes the radius of the measurement surface .when sampling effects are considered , an oat system is properly described as a continuous - to - discrete ( c - d ) imaging model : {qk+k } = h^e(t ) * _ t \frac{1}{s_q } \int_{s_q}\!\ ! d\mathbf r^s\ , p(\mathbf r^s , t)\big|_{t = k\delta_t},\quad \substack { q=0,1,\cdots , q-1\\ k=0,1,\cdots , k-1 } , \ ] ] where and denote the total numbers of transducers ( indexed by ) and the time samples ( indexed by ) respectively . is the surface area of the -th transducer , which is assumed to be a subset of ; denotes the acousto - electric impulse response ( eir ) of each transducer that , without loss of generality , is assumed to be identical for all transducers ; ` ' denotes a linear convolution with respect to time coordinate ; and is the temporal sampling interval .the vector represents the lexicographically ordered measured voltage signals whose -th element is denoted by {qk+k} ] and is the expansion function . on substitution from eqn . into eqn . 
, where is defined by eqn ., one obtains a d - d mapping from to , expressed as where each element of the matrix is defined as {qk+k , n } = \big [ h^e * _ t \frac{1}{s_q } \int_{s_q}\!\!d \mathbf r^s \mathcal h_{\rm cc } \psi_n \big]_{t = k\delta_t}.\ ] ] here , is the d - d imaging operator also known as system matrix or projection operator .note that the ` ' in eqn .is due to the use of the finite - dimensional representation of the object function ( i.e. , eqn . ) .no additional approximations have been introduced . below we describe two types of d - d imaging models that have been employed in oat : the interpolation - based imaging model and the spherical - voxel - based imaging model .the quantities , , and ( or ) in the two models will be distinguished by the subscripts ( or superscripts ) ` int ' and ` sph ' , respectively .the interpolation - based d - d imaging model defines the coefficient vector as samples of the object function on the nodes of a uniform cartesian grid : = \int_v\!\!d \mathbf r\ ,\delta(\mathbf r-\mathbf r_n ) a(\mathbf r),\quad n=0,1,\cdots , n-1,\ ] ] where , specifies the location of the -th node of the uniform cartesian grid .the definition of the expansion function depends on the choice of interpolation method .if a trilinear interpolation method is employed , the expansion function can be expressed as : where is the distance between two neighboring grid points . in principle , the interpolation - based d - d imaging model can be constructed by substitution from eqns . and to eqn . .in practice , however , implementation of the surface integral over is difficult for the choice of expansion functions in eqn . .also , implementations of the temporal convolution and usually require extra discretization procedures . therefore ,utilization of the interpolation - based d - d model commonly assumes the transducers to be point - like . in this case , the implementation of is decomposed as a three - step operation : where , , and are discrete approximations of the srt ( eqn . ) , the differential operator ( eqn . ) , and the operator that implements a temporal convolution with eir , respectively .we implemented in a way that is similar to the ` ray - driven ' implementation of radon transform in x - ray ct , i.e , for each data sample , we accumulated the contributions from the voxels that resided on the spherical shell specified by the data sample . by use of eqns ., , , and , one obtains : {qk+k } = \delta_s^2 \sum_{n=0}^{n-1 } \big[\boldsymbol \alpha_{\rm int}\big]_n \sum_{i=0}^{n_i-1 } \sum_{j=0}^{n_j-1 } \psi_n^{\rm int}(\mathbf r_{k , i , j } ) \equiv \big[\mathbf g\big]_{qk+k } , \ ] ] where {qk+k } \approx g(\mathbf r^s_q , t)|_{t = k\delta_t} ] .finally , the continuous temporal convolution is approximated by a discrete linear convolution as {qk+k } = \sum_{\kappa = 0}^{k-1 } [ \mathbf h^e ] _{ k-1-\kappa } [ \mathbf p_{\rm int}]_{qk+\kappa } \equiv [ \mathbf u_{\rm int}]_{qk+k},\ ] ] where {k } = \delta_t h^e(t)|_{t = k\delta_t} ] was equally divided with interval , starting from . at each polar angle ,a ring on the sphere that was parallel to the plane can be specified , resulting rings . 
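a minimal numpy sketch of the ray - driven srt accumulation introduced above ( trilinear interpolation of the voxel grid at points on the integration sphere , weighted by patch areas ) is given below ; the uniform angular patch parameterization is an illustrative choice , not necessarily the one described in the appendix :

```python
# minimal sketch of one "ray-driven" srt sample: the surface integral over the
# sphere |r - r_s| = c0*t is approximated by summing trilinearly interpolated
# object values over small patches. the uniform (theta, phi) patch
# parameterization is illustrative only.
import numpy as np

def trilinear(vol, dx, origin, pts):
    """trilinear interpolation of a 3d voxel grid at arbitrary points (m, 3)."""
    f = (pts - origin) / dx                      # fractional voxel coordinates
    i0 = np.floor(f).astype(int)
    w = f - i0
    out = np.zeros(len(pts))
    nx, ny, nz = vol.shape
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                ix, iy, iz = i0[:, 0] + di, i0[:, 1] + dj, i0[:, 2] + dk
                inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny) & (iz >= 0) & (iz < nz)
                wgt = (np.where(di, w[:, 0], 1 - w[:, 0]) *
                       np.where(dj, w[:, 1], 1 - w[:, 1]) *
                       np.where(dk, w[:, 2], 1 - w[:, 2]))
                out[inside] += wgt[inside] * vol[ix[inside], iy[inside], iz[inside]]
    return out

def srt_sample(vol, dx, origin, r_s, radius, n_theta=180, n_phi=360):
    """approximate surface integral of vol over the sphere of given radius
    centered at the transducer location r_s."""
    th = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    ph = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    dirs = np.stack([np.sin(TH) * np.cos(PH),
                     np.sin(TH) * np.sin(PH),
                     np.cos(TH)], axis=-1).reshape(-1, 3)
    pts = r_s + radius * dirs
    # patch area for a sphere sampled uniformly in (theta, phi)
    area = (radius ** 2) * np.sin(TH).ravel() * (np.pi / n_theta) * (2 * np.pi / n_phi)
    return np.sum(area * trilinear(vol, dx, origin, pts))
```

each data sample depends only on its own integration sphere , which is what makes a thread - per - sample mapping on the gpu natural .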
on each ring , ultrasonic transducers were assumed to be uniformly distributed with azimuth angle interval .hereafter , each azimuth angle will be referred to as a tomographic view .at each view , we assumed that temporal samples were acquired and the first sample corresponded to time instance .for implementations in temporal - frequency domain , we assumed that temporal - frequency samples were available and the first sample corresponded to .the region to be reconstructed was a rectangular cuboid whose edges were parallel to the axes of the coordinate system and the left - bottom - back vertex was located at .the numbers of voxels along the three coordinates will be denoted by , , and , respectively , totally voxels .we also assumed the cuboid was contained in another sphere of radius that was concentric with the measurement sphere shown in fig .[ fig : geo]-(b ) . central processing unit ( cpu)-based implementations of continuous fbp formulae have been described in refs . .though the discretization methods vary , in general , three approximations have to be employed .firstly , the first - order derivative term has to be approximated by a difference scheme up to certain order .secondly , the measurement sphere has to be divided into small patches , and the surface integral has to be approximated by a summation of the area of every patch weighted by the effective value of the integrand on the patch .finally , the value of the integrand at an arbitrary time instance has to be approximated by certain interpolation method . in this study, we approximated the surface integral by use of the trapezoidal rule .as described earlier , the spherical surface was divided into patches . for the transducer indexed by that was located at ,the area of the patch was approximated by .the value at time instance was approximated by the linear interpolation from its two neighboring samples as : {qk+k } + \big(\tilde k - k\big)\big[\mathbf p\big]_{qk+k+1},\ ] ] where , and is the integer part of . here is a vector of lexicographically ordered samples of the pressure function , which is estimated from the measured voltage data vector .also , the first - order derivative term was approximated by : {qk+k+1 } - \big[\mathbf p\big]_{qk+k } \big).\ ] ] by use of these three numerical approximations , the discretized fbp formula was expressed as : = -\frac{c_pr^s\delta_{\theta^s}\delta_{\phi^s}}{\pi\beta c_0 ^ 3\delta_t } \sum_{n_r=0}^{n_r-1}\sin \theta^s_q & \sum_{n_v=0}^{n_v-1 } \bigg\ { \big(1.5-\frac { k+t_{\rm min}/\delta_t } { \tilde k + t_{\rm min}/\delta_t}\big)\big[\mathbf p\big]_{qk+k+1}\\ & + \big(\frac { k+1+t_{\rm min}/\delta_t } { \tilde k + t_{\rm min}/\delta_t}-1.5\big)\big[\mathbf p\big]_{qk+k } \bigg\}. \end{split}\ ] ] unlike the implementations of fbp formulas in x - ray cone beam ct , we combined the filter and the linear interpolation .this reduced the number of visits to the global memory in the gpu implementation described below .we implemented the fbp formula in a way that is similar to the ` pixel - driven ' implementation in x - ray ct , i.e. , we assigned each thread to execute the two accumulative summations in eqn . 
for each voxel .we bound the pressure data to texture memory because it is cached and has a faster accessing rate .therefore our implementation only requires access to texture memory twice and to global memory once .the pseudo - codes are provided in algs .[ alg : fbp ] and [ alg : k_fbp ] for the host part and the device part respectively .note that the pseudo - codes do not intend to be always optimal because the performance of the codes could depend on the dimensions of and .for example , we set the block size to be because for our applications , was bigger than and and smaller than the limit number of threads that a block can support ( i.e. , 1024 for the nvidia tesla c2050 ) . if the values of , , and change , we may need to redesign the dimensions of the grid and blocks .however , the general simd parallelization strategy remains .the forward projection operation is composed of three consecutive operations , , and that are defined in eqns . , , and , respectively .both the difference operator and the one - dimensional ( 1d ) convolution have low computational complexities while the srt operator is computationally burdensome .hence , we developed the gpu - based implementation of while leaving and to be implemented by cpus .the srt in oat shares many features with the radon transform in x - ray ct .thus , our gpu - based implementation is closely related to the implementations of radon transform that have been optimized for x - ray ct .the surface integral was approximated according to the trapezoidal rule .firstly , the integral surface was divided into small patches , which is described in the appendix .secondly , each patch was assigned an effective value of the object function by trilinear interpolation .the trilinear interpolation was calculated by use of the texture memory of gpus that is specifically designed for interpolation .finally , gpu threads accumulated the areas of patches weighted by the effective values of the object function and wrote the final results to global memory .the pseudo - codes for implementation of are provided in algs .[ alg : inth ] and [ alg : k_inth ] for the host part and the device part , respectively .note that we employed the `` one - level''-strategy , i.e. , each thread calculates one data sample .higher level strategies have been proposed to improve the performance by assigning each block to calculate multiple data samples , which , however , caused many thread idles in oat mainly because the amount of computation required to calculate a data sample varies largely among samples for srt .implementation of the backprojection operator was very similar to the implementation of .the operators and were calculated on cpus while was calculated by use of gpus .the pseudo - codes are provided in algs .[ alg : intht ] and [ alg : k_intht ] .we made use of the cuda function ` atomicadd ' to add weights to global memory from each thread .implementation of the forward projection operation for the spherical - voxel - based imaging model is distinct from that of the interpolation - based model .the major difference is that calculation of each element of the data vector for the spherical - voxel - based imaging model requires the accumulation of the contributions from all voxels because the model is expressed in the temporal frequency domain . 
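a schematic of this all - voxel accumulation , with a placeholder delayed spherical - spreading kernel standing in for the actual spherical - voxel response ( which is not reproduced here ) ; computing the sum in fixed - size chunks mirrors the staging strategy described next :

```python
# schematic only: the per-voxel kernel below is a placeholder (propagation
# delay and 1/d spreading), not the actual spherical-voxel response derived in
# the paper. it illustrates that every voxel contributes to every
# temporal-frequency sample, and that the sum can be accumulated in chunks.
import numpy as np

def project_freq(alpha, voxel_pos, det_pos, freqs, c0=1.5, chunk=4096):
    """accumulate complex frequency-domain data: p~(f) = sum_n alpha_n k(f, n)."""
    p_tilde = np.zeros(len(freqs), dtype=complex)
    for start in range(0, len(alpha), chunk):            # chunked accumulation
        a = alpha[start:start + chunk]
        d = np.linalg.norm(voxel_pos[start:start + chunk] - det_pos, axis=1)
        k = np.exp(-2j * np.pi * np.outer(freqs, d) / c0) / d   # placeholder kernel
        p_tilde += k @ a
    return p_tilde
```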
because of this , the amount of computation required to calculate each data sample in the spherical - voxel - based imaging model is almost identical , simplifying the parallelization strategy .we proposed a parallelization strategy that was inspired by one applied in advanced mri reconstruction and is summarized as follows .discrete samples of defined in eqn . were precalcualted and stored as a vector in constant memory . because the size of the input vector is often too large to fit in the constant memory , we divided into sub - vectors that matched the capacity of the constant memory .we employed a cpu loop to copy every sub - vector sequentially to the constant memory and call the gpu kernel function to accumulate a partial summation .the major advantage of this design is that the total number of global memory visits to calculate one data sample is reduced to the number of sub - vectors .implementation of the projection operator for the spherical - voxel - based imaging model generally involves more arithmetic operations than does the interpolation - based imaging model .moreover , the spherical - voxel - based imaging model has been employed to compensate for the finite aperture size effect of transducers , which makes the computation even more burdensome .because of this , we further developed an implementation that employed multiple gpus .the pseudo - codes of the projection operation are provided in algs .[ alg : sphh ] , [ alg : fwd_pthread ] , and [ alg : kfwdsph ] .we created pthreads on cpus by use of the ` pthread.h ' library .here , we denote the threads on cpus by ` pthread ' to distinguish from threads on gpus .we divided the input vector into sub - vectors ( denoted by s ) of equal size and declared an output vector of dimension . by calling the pthread function ` fwd_pthread ' , pthreads simultaneously calculated the projection .each pthread projected an to a partial voltage data vector that filled in the larger vector .once all pthreads finished filling their into , the projection data were obtained by a summation of the s .implementation of the backprojection operator was similar except the dividing and looping were over the vector instead of .the pseudo - codes for the backprojection operation are provided in algs .[ alg : sphht ] , [ alg : bwd_pthread ] , and [ alg : kbwdsph ] .the computational efficiency and accuracy of the proposed gpu - based implementations of the fbp algorithm and projection / backprojection operators for use with iterative image reconstruction algorithms were quantified in computer simulation and experimental oat imaging studies ._ numerical phantom : _ the numerical phantom consisted of uniform spheres that were blurred by a 3d gaussian kernel possessing a full width at half maximum ( fwhm ) of -mm .the phantom was contained within a cuboid of size -mm . a 2d image corresponding to the plane through the phantomis shown in fig .[ fig : fbp]-(a ) . .5 cm _ simulated projection data : _ the measurement surface was a sphere of radius -mm . corresponding to an exsiting oat imaging system .as described in section [ sect : gpumethods ] , ideal point - like transducers were uniformly distributed over rings and tomographic views .the rings covered the full polar angle , i.e. , , while the views covered the full azimuth angle .the speed of sound was set at -mm/ .we selected the grneisen coefficient as of arbitrary units ( a.u . ) . 
for each transducer , we analytically calculated temporal samples of the pressure function at the sampling rate of -mhz by use of eqn . . because we employed a smooth object function , the pressure data were calculated by the following two steps : firstly , we calculated temporal samples of the pressure function that corresponds to the uniform spheres by summing the per - sphere contributions \[ \big[\mathbf p_0^{(i)}\big]_{qk+k } = \left\{ \begin{array}{ll } \Big[\ , \frac{\gamma a_i}{2}\,\frac{|\mathbf r^s -\mathbf r_i| - c_0 t}{|\mathbf r^s -\mathbf r_i|}\ , \Big]_{t = k\delta_t } , & { \rm if } \ ; \big|c_0k\delta_t-|\mathbf r^s -\mathbf r_i| \big| \leq r_i \\ 0 , & { \rm otherwise } \end{array}\right . \] where \gamma is the grüneisen coefficient , and \mathbf r_i , r_i and a_i denote the center location , the radius and the absorbed energy density of the i - th sphere , respectively . subsequently , we convolved these samples with a one - dimensional ( 1d ) gaussian kernel with - to produce the pressure data . from the simulated pressure data , we calculated the temporal - frequency spectrum by use of fast fourier transform ( fft ) , from which we created an alternative data vector that contained frequency components occupying
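a short sketch of this two - step simulation for a single sphere , assuming the classical n - shaped solution for a uniformly heated sphere ( the prefactor convention here may differ from the eqn . above ) and illustrative sampling and gaussian - width values :

```python
# sketch of the two-step simulation for one uniform sphere: the classical
# n-shaped pressure profile followed by smoothing with a 1d gaussian kernel.
# the sampling rate, distances, and gaussian width are illustrative only.
import numpy as np

def sphere_pressure(t, dist, radius, A=1.0, gamma=1.0, c0=1.5):
    """n-shaped wave from a uniformly heated sphere observed at distance dist."""
    p = gamma * A * (dist - c0 * t) / (2.0 * dist)
    p[np.abs(c0 * t - dist) > radius] = 0.0      # nonzero only within the n-wave window
    return p

def gaussian_smooth(p, dt, fwhm):
    """convolve a sampled profile with a normalized 1d gaussian kernel."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    tk = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-0.5 * (tk / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(p, kernel, mode="same")

if __name__ == "__main__":
    dt = 0.05                                    # microseconds (illustrative)
    t = np.arange(0.0, 40.0, dt)
    p0 = sphere_pressure(t, dist=30.0, radius=2.0)   # distances in mm
    p = gaussian_smooth(p0, dt, fwhm=0.2)
```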
figure captions :
fig . 1 : ( a ) schematic of the 3d oat scanning geometry . ( b ) schematic of the local coordinate system for the implementation of the interpolation - based d - d imaging model .
fig . 2 : slices corresponding to the plane of ( a ) the phantom and the images reconstructed by use of ( b ) the cpu - based and ( c ) the gpu - based implementations of the fbp algorithm from the `` ''-data .
fig . 3 : slices corresponding to the plane of the images reconstructed by use of the fbp algorithm with ( a ) the cpu - based implementation from the `` ''-data , ( b ) the cpu - based implementation from the `` ''-data , ( c ) the cpu - based implementation from the `` ''-data , ( d ) the gpu - based implementation from the `` ''-data , ( e ) the gpu - based implementation from the `` ''-data , and ( f ) the gpu - based implementation from the `` ''-data .
fig . 4 : slices corresponding to the plane of the images reconstructed by use of the gpu - based implementations of ( a ) the pls - int algorithm from the `` ''-data , ( b ) the pls - int algorithm from the `` ''-data , ( c ) the pls - int algorithm from the `` ''-data , ( d ) the pls - sph algorithm from the `` ''-data , ( e ) the pls - sph algorithm from the `` ''-data , and ( f ) the pls - sph algorithm from the `` ''-data .
fig . 5 : profiles along the line -mm of the images reconstructed by use of ( a ) the cpu- and gpu - based implementations of the fbp algorithm from the `` ''-data , and ( b ) the gpu - based implementations of the pls - int and the pls - sph algorithms from the `` ''-data .
fig . 7 : mip renderings of the 3d images of the mouse body reconstructed by use of the gpu - based implementations of ( a ) the fbp algorithm from the `` full data '' , ( b ) the pls - int algorithm from the `` full data '' with , ( c ) the pls - sph algorithm from the `` full data '' with , ( d ) the fbp algorithm from the `` quarter data '' , ( e ) the pls - int algorithm from the `` quarter data '' with , and ( f ) the pls - sph algorithm from the `` quarter data '' with . the grayscale window is [ 0,12.0 ] .
[ algorithm listings : host- and device - side pseudo - codes for the fbp algorithm and for the projection / backprojection operators of the interpolation - based and spherical - voxel - based models ]
* purpose : * optoacoustic tomography ( oat ) is inherently a three - dimensional ( 3d ) inverse problem . however , most studies of oat image reconstruction still employ two - dimensional ( 2d ) imaging models . one important reason is because 3d image reconstruction is computationally burdensome . the aim of this work is to accelerate existing image reconstruction algorithms for 3d oat by use of parallel programming techniques . * methods : * parallelization strategies are proposed to accelerate a filtered backprojection ( fbp ) algorithm and two different pairs of projection / backprojection operations that correspond to two different numerical imaging models . the algorithms are designed to fully exploit the parallel computing power of graphic processing units ( gpus ) . in order to evaluate the parallelization strategies for the projection / backprojection pairs , an iterative image reconstruction algorithm is implemented . computer - simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms . * results : * the gpu implementations improve the computational efficiency by factors of , , and for the fbp algorithm and the two pairs of projection / backprojection operators , respectively . accurate images are reconstructed by use of the fbp and iterative image reconstruction algorithms from both computer - simulated and experimental data . * conclusions : * parallelization strategies for 3d oat image reconstruction are proposed for the first time . these gpu - based implementations significantly reduce the computational time for 3d image reconstruction , complementing our earlier work on 3d oat iterative image reconstruction .
disks around young stars are thought to be the sites of planet formation .however , many questions exist concerning how the gas and dust in the disk evolve into a planetary system . observations of t tauri stars ( tts ) may provide insights into these questions , and a subset of tts , the `` transitional disks , '' have gained increasing attention in this regard .the unusual seds of transitional disks ( which feature infrared excess deficits ) may indicate that they have developed significant radial structure .transitional disk seds were first identified by from near - infrared ( nir ) ground - based photometry and iras mid - infrared ( mir ) photometry .these systems exhibited small nir and mir excesses , but significant mid- and far - ir ( fir ) excesses indicating that the _ dust _ distribution of these disks had an inner hole ( i.e. , a region that is mostly devoid of small dust grains from a radius r down to the central star ) .they proposed that these disks were in transition from objects with optically thick disks that extend inward to the stellar surface ( i.e. , class ii objects ) to objects where the disk has dissipated ( i.e. , class iii objects ) , possibly as a result of some phase of planet formation .a few years later , proposed that such transitional disk seds were consistent with the expectations for a disk subject to tidal effects exerted by companions , either stars or planets .more detailed studies of transitional disks became possible as increasingly sophisticated instruments became available .the spectrographs on board _ iso _ were able to study the brightest stars and inferred a large hole in the disk of the herbig ae star hd 100546 based on its sed .usage of the term `` transitional disk '' gained substantial momentum in the literature after the _ spitzer space telescope _ infrared spectrograph ( irs ; * ? ? ?* ) was used to study disks with inner holes ( e.g. , * ? ? ?* ; * ? ? ?_ spitzer _ also detected disks with an annular `` gap '' within the disk as opposed to holes ( e.g. , * ? ? ?* ; * ? ? ?in this review , we use the term _ transitional disk _ to refer to an object with an inner disk hole and _ pre - transitional disk _ to refer to a disk with a gap . for many ( pre-)transitional disks ,the inward truncation of the outer dust disk has been confirmed , predominantly through ( sub)millimeter interferometric imaging ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?we note that ( sub)mm imaging is not currently capable of distinguishing between a hole or gap in the disk ( i.e. , it can only detect a generic region of clearing or a `` cavity '' in the disk ) .also , it has not yet been confirmed if these clearings detected in the dust disk are present in the gas disk as well .the combination of these dust cavities with the presence of continuing gas accretion onto the central star is a challenge to theories of disk clearing .the distinct seds of ( pre-)transitional disks have lead many researchers to conclude that these objects are being caught in an important phase in disk evolution .one possibility is that these disks are forming planets given that cleared disk regions are predicted by theoretical planet formation models ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?potentially supporting this , there exist observational reports of protoplanet candidates in ( pre-)transitional disks ( e.g. , lkca 15 , t cha ; * ? ? ?* ; * ? ? 
?stellar companions can also clear the inner disk but many stars harboring ( pre-)transitional disks are single stars .even if companions are not responsible for the clearings seen in all ( pre-)transitional disks , these objects still have the potential to inform our understanding of how disks dissipate , primarily by providing constraints for disk clearing models involving photoevaporation and grain growth . in this chapter, we will review the key observational constraints on the dust and gas properties of ( pre-)transitional disks and examine these in the context of theoretical disk clearing mechanisms . in 2 , we look at seds ( 2.1 ) as well as ( sub)mm ( 2.2 ) and ir ( 2.3 ) imaging .we also review ir variability in ( pre-)transitional disks ( 2.4 ) and gas observations ( 2.5 ) . in 3 , we turn the observations from 2 into constraints for the main disk clearing mechanisms proposed to date ( i.e. , photoevaporation , grain growth , and companions ) and discuss these mechanisms in light of these constraints . in 4 , we examine the demographics of ( pre-)transitional disks ( i.e. , frequencies , timescales , disk masses , accretion rates , stellar properties ) in the context of disk clearing and in 5 we conclude with possibilities for future work in this field .in the two decades following s identification of the first transitional disks using nir and mir photometry , modeling of the seds of these disks , enabled largely by _ spitzer _ irs , inferred the presence of holes and gaps that span several au ( 2.1 ) .many of these cavities were confirmed by ( sub)mm interferometric imaging ( 2.2 ) and ir polarimetric and interferometric ( 2.3 ) images .later on , mir variability was detected which pointed to structural changes in these disks ( 2.4 ) . while there are not currently as many constraints on the gas in the disk as there are for the dust , it is apparent that the nature of gas in the inner regions of ( pre-)transitional disks differs from other disks ( 2.5 ) .these observational results have significant implications for our understanding of planet formation and we review them in the following sections .+ + + * 2.1 spectral energy distributions * seds are a powerful tool in disk studies as they provide information over a wide range of wavelengths , tracing different emission mechanisms and material at different stellocentric radii . in a sed , one can see the signatures of gas accretion ( in the ultraviolet ; see ppiv review by * ? ? ?* ) , the stellar photosphere ( typically in tts ) , and the dust in the disk ( in the ir and longer wavelengths ) .however , seds are not spatially resolved and this information must be supplemented by imaging , ideally at many wavelengths ( see 2.22.3 ) . herewe review what has been learned from studying the seds of ( pre-)transitional disks , particularly using _ spitzer _ irs , irac , and mips ._ sed classification _a popular method of identifying transitional disks is to compare individual seds to the median sed of disks in the taurus star - forming region ( fig .[ figseds ] , dashed line in panels ) .the median taurus sed is typically taken as representative of an optically thick full disk ( i.e. , a disk with no significant radial discontinuities in its dust distribution ) . 
the nir emission ( 15 ) seen in the seds of full disksis dominated by the `` wall '' or inner edge of the dust disk .this wall is located where there is a sharp change at the radius at which the dust destruction temperature is reached and dust sublimates .tts in taurus have nir excess emission which can be fit by blackbodies with temperatures within the observed range of dust sublimation temperatures ( 10002000 k ; * ? ? ?* ) , indicating that there is optically thick material located at the dust destruction radius in full disks .roughly , the mir emission in the sed traces the inner tens of au in disks and emission at longer wavelengths comes from outer radii of the disk .the seds of transitional disks are characterized by nir ( 15 ) and mir emission ( 520 ) similar to that of a stellar photosphere , while having excesses at wavelengths and beyond comparable to the taurus median , ( fig .[ figseds ] , bottom ; * ? ? ?from this we can infer that the small , hot dust that typically emits at these wavelengths in full disks has been removed and that there is a large hole in the inner disk , larger than can be explained by dust sublimation ( fig .[ figsch2 ] , bottom ) .large clearings of dust in the submm regime have been identified in disks characterized by this type of sed ( 2.2 ; e.g. , * ? ? ?? * ; * ? ? ?* ) , confirming the sed interpretation .we note that disks with holes have also been referred to as cold disks or weak excess transitional disks , but here we use the term `` transitional disks . '' a subset of disks with evidence of clearing in the submm show significant nir excesses relative to their stellar photospheres , in some cases comparable to the median taurus sed , but still exhibit mir dips and substantial excesses beyond ( fig . [ figseds ] , middle ) .this nir excess is blackbody - like with temperatures expected for the sublimation of silicates , similar to the nir excesses in full disks discussed earlier .this similarity indicates that these disks still have optically thick material close to the star , possibly a remnant of the original inner disk , and these gapped disks have been dubbed pre - transitional disks , cold disks , or warm transitional disks .here we adopt the term `` pre - transitional disks '' for these objects . in table 1we summarize some of properties of many of the well - known ( pre-)transitional disks .we note that some seds have emission that decreases steadily at all wavelengths ( fig .[ figseds ] , top ) .these disks have been called a variety of names : anemic , homologously depleted , evolved , weak excess transition . herewe adopt the terminology `` evolved disk '' for this type of object .some researchers include these objects in the transitional disk class .however , these likely comprise a heterogenous class of disks , including cleared inner disks , debris disks ( see chapter in this volume by _ matthews et al ._ ) , and disks with significant dust grain growth and settling .this has been an issue in defining this subset of objects .we include evolved disks in this review for completeness , but focus on disks with more robust evidence for disk holes and gaps .lcccccc [ tabbest ] ab aur & ptd & 70 & ... & 6 au & 1.3 & 1 , 25 , 38 + coku tau & td & ... & ... & binary & & 26 , 39 + dm tau & td & 19 & ... & 6 au & 2.9 & 2 , 25 , 40 + gm aur & td & 28 & yes & 6 au & 9.6 & 2 , 11 , 25 , 40 + hd 100546 & ptd & ... & yes & ... & 5.9 & 12 , 41 + hd 141569 & ptd & ... & yes & ... 
& 7.4 & 13 , 38 + hd 142527 & ptd & 140 & yes & binary & 9.5 & 3 , 14 , 27 , 38 + hd 169142 & ptd & ... & yes & ... & 9.1 & 15 , 38 + irs 48 & td & 60 & ... & 8 au & 4.0 & 4 , 28 , 38 + lkca 15 & ptd & 50 & yes & 6 au & 3.1 & 2 , 16 , 25 , 40 + mwc 758 & ptd & 73 & ... & 28 au & 4.5 & 2 , 29 , 38 + pds 70 & ptd & ... & yes & 6 au & & 17 , 30 , 42 + rx j1604 - 2130 & td & 70 & yes & 6 au & & 5 , 18 , 31 + rx j1615 - 3255 & td & 30 & no & 8 au & 4 & 2 , 19 , 33 , 43 + rx j1633 - 2442 & td & 25 & ... & 6 au & 1.3 & 6 , 25 , 44 + ry tau & ptd & 14 & ... & 6 au & 6.49.1 & 7 , 25 , 45 + sao 206462 & ptd & 46 & ... & 25 au & 4.5 & 2 , 32 , 38 + sr 21 & td & 36 & no & 8 au & .4 & 2 , 20 , 33 , 46 + sr 24 s & ptd & 29 & ... & 8 au & 7.1 & 2 , 33 , 46 + sz 91 & td & 65 & yes & 25 au & 1.4 & 8 , 21 , 34 + tw hya & td & 4 & no & 3 au & 1.8 & 9 , 22 , 35 , 40 + ux tau a & ptd & 25 & ... & 6 au & 1.1 & 2 , 25 , 40 , 47 + v4046 sgr & td & 29 & ... & binary & 5.0 & 10 , 36 , 48 + doar 44 & ptd & 30 & no & 8 au & 3.7 & 2 , 23 , 33 , 38 + lkh 330 & ptd & 68 & no & 8 au & 2.2 & 2 , 24 , 33 , 38 + wsb 60 & ptd & 15 & ... & 25 au & 3.7 & 2 , 37 , 46 -.15 in _ model fitting _ detailed modeling of many of the above - mentioned seds has been performed in order to infer the structure of these disks .seds of transitional disks ( i.e. , objects with little or no nir and mir emission ) have been fit with models of inwardly truncated optically thick disks ( e.g. , * ? ? ? * ; * ? ? ?* ) . the inner edge or `` wall '' of the outer disk is frontally illuminated by the star , dominating most of the emission seen in the irs spectrum , particularly from .some of the holes in transitional disks are relatively dust - free ( e.g. , dm tau ) while sed model fitting indicates that others with strong 10 silicate emission have a small amount of optically thin dust within their disk holes to explain this feature ( e.g. , gm aur ; * ? ? ?beyond , transitional disks have a contribution to their seds from the outer disk . in pre - transitional disks ,the observed sed can be fit with an optically thick inner disk separated by an optically thin gap from an optically thick outer disk ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?there is an inner wall located at the dust sublimation radius which dominates the nir ( 25 ) emission and should cast a shadow on the outer disk ( see 2.4 ) . in a few cases ,the optically thick inner disk of pre - transitional disks has been confirmed using nir spectra following the methods of .like the transitional disks , there is evidence for relatively dust - free gaps ( e.g. , ux tau a ) as well as gaps with some small , optically thin dust to explain strong 10 silicate emission features ( e.g. , lkca 15 ; * ? ? ?the seds of evolved disks can be fit with full disk models , particularly in which the dust is very settled towards the midplane ( e.g. , * ? ? ?ori , 3 myr ; ori ob1b , 7 myr ; ori ob1a , 10 myr .the hatched region corresponds to stellar photospheric colors .the error bars represent the median and quartiles of taurus objects ( i.e. , where most full disks are expected to lie ) .well characterized transitional disks ( gm aur , coku tau , cvso224 ; * ? ? 
?* ) and pre - transitional disks ( lkca 15 , ux tau ) are indicated with asterisks .the dotted lines correspond to the lower quartile of disk emission in ori , and roughly separate the evolved disks ( lower left ) from the transitional disks ( lower right ) .note that the pre - transitional disks do not lie below the dotted line , highlighting that it is harder to identify disk gaps based on colors alone .figure adapted from ., title="fig:",width=309 ] + there are many degeneracies to keep in mind when interpreting sed - based results .first , there is a limit to the gap sizes that can be detected with __ irs . over 80 of the emission at 10 comes from within 1 au in the disk ( e.g. , * ? ? ?therefore , _ spitzer _irs is most sensitive to clearings in which a significant amount of dust located at radii au has been removed , and so it will be easier to detect disks with holes ( i.e. , transitional disks ) as opposed to disks with gaps ( i.e. , pre - transitional disks ) .the smallest gap in the innermost disk that will cause a noticeable `` dip '' in the _ spitzer _ spectrum would span .3 - 4 au .it would be very difficult to detect gaps whose inner boundary is outside of 1 au ( e.g. , a gap spanning 510 au in the disk ; * ? ? ? * ) .therefore , with current data we can not exclude that any disk currently thought to be a full disk contains a small gap nor can we exclude that currently known ( pre-)transitional disks have additional clearings at larger radii ( e.g. , * ? ? ? * ) .it will be largely up to _ alma _ and the next generation of ir interferometers to detect such small disk gaps ( e.g. , * ? ? ? * ) .one should also keep in mind that millimeter data are necessary to break the degeneracy between dust settling and disk mass ( see * ? ? ? * ) .also , the opacity of the disk is controlled by dust and in any sophisticated disk model the largest uncertainty lies in the adopted dust opacities .we will return to disk model limitations in implications for color - color diagrams _another method of identifying transitional disks is through color - color diagrams ( fig .[ figcolorcolor ] ) .this method grew in usage as more _irac and mips data became available and ( pre-)transitional disks well characterized by _ spitzer _ irs spectra could be used to define the parameter space populated by these objects .in these diagrams , transitional disks are distinct from other disks since they have nir colors or slopes ( generally taken between two irac bands , or k and an irac band ) significantly closer to stellar photospheres than other disks in taurus , but mir colors ( generally taken between k or one irac band and mips [ 24 ] ) comparable or higher than other disks in taurus ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?color - color diagrams are limited in their ability to identify pre - transitional disks because their fluxes in the nir are comparable to many other disks in taurus . irs data can do a better job of identifying pre - transitional disks using the equivalent width of the 10 feature or the nir spectral index ( e.g. , n ) versus the mir spectral index ( e.g. , n * ? ? ?* ; * ? ? ?* ; * ? ? ?evolved disks are easier to identify in color - color diagrams since they show excesses over their stellar photospheres that are consistently lower than most disks in taurus , both in the near _ and _ mid - ir . 
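the nir and mir spectral indices used in these diagnostics follow directly from two photometric points ; a minimal sketch , with placeholder wavelengths and fluxes rather than measured values :

```python
# minimal sketch: an infrared spectral index between two wavelengths,
# n = d log(lambda * F_lambda) / d log(lambda), from two photometric points.
# the example wavelengths and fluxes are placeholders, not measured values.
import numpy as np

def spectral_index(lam1, f1, lam2, f2):
    """index n computed between (lam1, f1) and (lam2, f2); f is F_lambda."""
    return (np.log10(lam2 * f2) - np.log10(lam1 * f1)) / (np.log10(lam2) - np.log10(lam1))

# a pure photosphere in the rayleigh-jeans regime (F_lambda ~ lambda^-4)
# gives n close to -3, while flat lambda*F_lambda gives n = 0.
print(spectral_index(2.0, 1.0, 6.0, (2.0 / 6.0) ** 4))   # approx -3
```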
in the future ,_ jwst _ s sensitivity will allow us to expand _ spitzer _s sed and color - color work to many more disks , particularly to fainter objects in older and farther star - forming regions , greatly increasing the known number of transitional , pre - transitional , and evolved disks .upcoming high - resolution imaging surveys with ( sub)mm facilities in the near - future ( i.e. , alma ) and ir interferometers further in the future ( i.e. , vlt ) will give us a better understanding of the small - scale spatial structures in disks which seds can not access . * 2.2 submillimeter continuum imaging * the dust continuum at ( sub)millimeter wavelengths is an ideal probe of cool material in disks . at these wavelengths ,dust emission dominates over the contribution from the stellar photosphere , ensuring that contrast limitations are not an issue .moreover , interferometers give access to the emission structure on a wide range of spatial scales , and will soon provide angular resolution that regularly exceeds 100mas .the continuum emission at these long wavelengths is also thought to have relatively low optical depths , meaning the emission morphology is sensitive to the density distribution of mm and cm - sized grains .these features are especially useful for observing the dust - depleted inner regions of ( pre-)transitional disks , as will be illustrated in the following subsections . _( sub)mm disk cavities _ with sufficient resolution ,the ( sub)mm dust emission from disks with cavities exhibits a ring "- like morphology , with limb - brightened ansae along the major axis for projected viewing geometries . in terms of the actual measured quantity ,the interferometric visibilities , there is a distinctive oscillation pattern ( effectively a bessel function ) where the first null " is a direct measure of the cavity dimensions ( see * ? ? ?* ) . as of this writing, roughly two dozen disk cavities have been directly resolved at ( sub)mm wavelengths .a gallery of representative continuum images , primarily from observations with the submillimeter array ( sma ) , is shown in fig .[ figmm ] .for the most part , these discoveries have been haphazard : some disks were specifically targeted based on their infrared seds ( 2.1 ; e.g. , * ? ? ?* ) , while others were found serendipitously in high resolution imaging surveys aimed at constraining the radial distributions of dust densities ( e.g. , * ? ? ?perhaps the most remarkable aspect of these searches is the frequency of dust - depleted disk cavities at ( sub)mm wavelengths , especially considering that the imaging census of all disks has so far been severely restricted by both sensitivity and resolution limitations . in the nearest star - forming regions accessible to the northern hemisphere ( taurus and ophiuchus ) , only about half of the disks in the bright half of the mm luminosity ( mass ) distribution have been imaged with sufficient angular resolution ( .3 ) to find large ( in radius ) disk cavities .even with these strong selection biases , the incidence of resolved cavities is surprisingly high compared to expectations from ir surveys ( see 4 ) . estimated that at least 1 in 3 of these mm - bright ( massive ) disks exhibit large cavities ._ model fitting _ the basic structures of disk cavities can be quantified through radiative transfer modeling of their seds ( see 2.1 ) simultaneously with resolved mm data .these models often assume that the cavity can be described as a region of sharply reduced dust surface densities ( e.g. , * ? 
? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?such work finds cavity radii of , depletion factors in the inner disk of relative to optically thick full disks , and outer regions with sizes and masses similar to those found for full disks .however , there are subtleties in this simple modeling prescription .first , the depletion levels are usually set by the infrared sed , not the mm data : the resolved images have a limited dynamic range and can only constrain an intensity drop by a factor .second , and related , is that the sharpness " of the cavity edge is unclear .the most popular model prescription implicitly imposes a discontinuity , but the data only directly indicate that the densities substantially decrease over a narrow radial range ( a fraction of the still - coarse spatial resolution ; ) .alternative models with a smoother taper at the cavity edge can explain the data equally well in many cases ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , and might alleviate some of the tension with ir scattered light measurements ( see 2.3 ) .some additional problematic issues with these simple models have been illuminated , thanks to a new focus on the details of the resolved mm data .for example , in some disks the dust ring morphology is found to be remarkably narrow with nearly all of the emission coming from a belt 10 - 20au across ( or less ) even as we trace gas with molecular line emission extending hundreds of au beyond it ( e.g. , * ? ? ? * ) .this hints at the presence of a particle trap " near the cavity edge , as might be expected from local dynamical interactions between a planet and the gas disk ( see 3.2 ; * ? ? ?* ; * ? ? ?in a perhaps related phenomenon ( e.g. , * ? ? ? * ; * ? ? ?* ) , new high - fidelity images of ( pre-)transitional disks are uncovering evidence that strong azimuthal asymmetries are common features of the mm emission rings . band ( top ; 1.6 m ) along with their averaged radial surface brightness profiles along the major disk axis ( bottom ) . broken lines in the top and bottom panels correspond to the radius of the outer wall as measured with submm imaging and seds .we note that the regions of the inner disk that can not be resolved are masked out in the panels . from left to right the objects are as follows : sr 21 , rx j1852 ( _ kudo et al ._ , in prep . ) , rx j1604 .[ figseeds],width=8 ] a number of pressing issues will soon be addressed by the alma project . regarding the incidence of the disk holes and gaps, an expanded high resolution imaging census should determine the origin of the anomalously high occurrence of dust cavities in mm - wave images .if the detection rate estimated by is found to be valid at all luminosities , it would confirm that even the small amount of dust inside the disk cavities sometimes produces enough ir emission to hide the standard ( pre-)transitional disk signature at short wavelengths , rendering ir selection inherently incomplete something already hinted at in the current data .perhaps more interesting would be evidence that the ( pre-)transitional disk frequency depends on environmental factors , like disk mass ( i.e. , a selection bias ) or stellar host properties ( e.g. , * ? ? ? * ) .more detailed analyses of the disk structures are also necessary , both to develop a more appropriate modeling prescription and to better characterize the physical processes involved in clearing the disk cavities . 
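the bessel - function behavior of the visibilities mentioned above can be illustrated with a short sketch : for an axisymmetric , face - on brightness distribution the visibility is a hankel transform , and for a thin ring of radius r it follows j0(2 pi q r) , so the first null fixes the cavity scale ; the ring radius and baseline range below are placeholders :

```python
# illustrative sketch: the visibility of a face-on thin ring of angular radius
# R follows J0(2*pi*q*R), whose first null locates the ring radius.
# the ring radius and spatial-frequency range are placeholders.
import numpy as np
from scipy.special import j0

R = 0.3 / 206265.0                   # ring radius: 0.3 arcsec in radians
q = np.linspace(1e3, 3.5e5, 4000)    # spatial frequency (baseline / wavelength);
                                     # range chosen to contain only the first null
vis = np.abs(j0(2 * np.pi * q * R))

# first zero of J0(x) is at x ~ 2.4048, so the null sits near 2.4048/(2*pi*R)
q_null_pred = 2.4048 / (2 * np.pi * R)
q_null_found = q[np.argmin(vis)]
print(q_null_pred, q_null_found)     # agree to within the grid spacing
```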
specific efforts toward resolving the depletion zone at the cavity boundary , searching for material in the inner disk , determining ring widths and measuring their mm / radio colors to infer the signatures of particle evolution and trapping ( e.g. , * ? ? ?* ; * ? ? ?* ) , and quantifying the ring substructures should all impart substantial benefits on our understanding of disks . *2.3 infrared imaging * ir imaging has been used successfully to observe disks around bright stars .space - based observations free from atmospheric turbulence ( e.g. , hst ) have detected fine disk structure such as spiral features in the disk of hd 100546 and a ring - like gap in hd 141569 .mir images of ( pre-)transitional disks are able to trace the irradiated outer wall , which effectively emits thermal radiation ( e.g. , * ? ? ?* ; * ? ? ?more recently , high - resolution nir polarimetric imaging and ir interferometry has become available , allowing us to probe much further down into the inner few tens of au in disks in nearby star - forming regions .here we focus on nir polarimetric and ir interferometric imaging of ( pre-)transitional disks and also the results of ir imaging searches for companions in these objects . _ nir polarimetric imaging _nir polarimetric imaging is capable of tracing the spatial distribution of submicron - sized dust grains located in the uppermost , surface layer of disks .one of the largest nir polarimetry surveys of disks to date was conducted as part of the seeds ( strategic explorations of exoplanets and disks with subaru ; * ? ? ?* ) project .there is also vlt imaging work , predominantly focusing on disks around herbig ae stars .these surveys access the inner tens of au in disks , reaching a spatial resolution of 0.06 ( 8 au ) in nearby star - forming regions .such observations have the potential to reveal fine structures such as spirals , warps , offsets , gaps , and dips in the disk .most of the ( pre-)transitional disks around tts observed by seeds have been resolved .this is because the stellar radiation can reach the outer disk more easily given that the innermost regions of ( pre-)transitional disks are less dense than full disks . many of these disks can be sorted into the three following categories based on their observed scattered light emission ( i.e. , their `` polarized intensity '' appearance ) at 1.6 m : _ ( a ) no cavity in the nir with a smooth radial surface brightness profile at the outer wall _( e.g , sr 21 ; doar 44 ; rx j1615 ; fig .[ figseeds]a , b ) , _ ( b ) similar to category a , but with a broken radial brightness profile _ ( e.g. tw hya ; rx j1852.3 ; lkh 330 ) .these disks display a slight slope in the radial brightness profile in the inner portion of the disk , but a steep slope in the outer regions ( fig .[ figseeds]c , d ) , _ ( c ) a clear cavity in the nir polarized light _( e.g. , gm aur ; sz 91 ; pds 70 ; rx j1604 - 2130 ; fig .[ figseeds]e , f ) .+ the above categories demonstrate that the spatial distribution of small and large dust grains in the disk are not necessarily similar .based on previous submm images ( see 2.2 ) , the large , mm - sized dust grains in the inner regions of each of the above disks is significantly depleted . however , in categories a and b there is evidence that a significant amount of small , submicron sized dust grains remains in the inner disk , well within the cavity seen in the submm images . 
in category c ,the small dust grains appear to more closely trace the large dust distribution , as both are significantly depleted in the inner disk .one possible mechanism that could explain the differences between the three categories presented above is dust filtration ( e.g. , * ? ? ?* ; * ? ? ?* ) , which we will return to in more detail in 3.2 .more high - resolution imaging observations of disks at different wavelengths is necessary to develop a fuller picture of their structure given that the disk s appearance at a certain wavelength depends on the dust opacity . _ ir interferometric imaging _ir interferometers , such as the very large telescope interferometer ( vlti ) , the keck interferometer ( ki ) , and the chara array , provide milliarcsecond angular resolution in the nir and mir regime ( 113 m ) , enabling new constraints on the structure of ( pre-)transitional disks .such spatially resolved studies are important to reveal complex structure in the small dust distribution within the innermost region of the disk , testing the basic constructs of models that have been derived based on spatially unresolved data ( e.g. , seds ) .the visibility amplitudes measured with interferometry permit direct constraints on the brightness profile , and , through radiative transfer modeling ( see 2.1 and 2.2 for discussion of limitations ) , on the distribution and physical conditions of the circumstellar material . the nir emission ( and band , m ) in the pre - transitional disks studied most extensively with ir interferometry ( i.e. , hd100546 , tcha , and v1247ori ) is dominated by hot optically thick dust , with smaller contributions from scattered light and optically thin dust emission .the measured inner disk radii are in general consistent with the expected location of the dust sublimation radius , while the radial extent of this inner emission component varies significantly for different sources ( hd100546 : 0.244au , ; tcha : 0.070.11au , ; v1247ori : 0.180.27au , ). the mir regime ( band , m ) is sensitive to a wider range of dust temperatures and stellocentric radii . in the transitional disk of twhya ,the region inside of is found to contain only optically thin dust , followed by an optically thick outer disk , in agreement with sed modeling and ( sub)mm imaging . the gaps in the pre - transitional disks of the herbig ae star hd100546 and the tts tcha were found to be highly depleted of ( sub) - sized dust grains , with no significant nir or mir emission , consistent with sed - based expectations ( i.e. , no substantial 10 silicate emission ) .the disk around the herbig ae star v1247ori , on the other hand , exhibits a gap filled with optically thin dust .the presence of such optically thin material within the gap is not evident from the sed , while the interferometric observations indicate that this gap material is the dominant contributor at mir wavelengths .this illustrates the importance of ir interferometry for unraveling the physical conditions in disk gaps and holes .we note that besides the dust continuum emission , some ( pre-)transitional disks exhibit polycyclic aromatic hydrocarbon ( pah ) spectral features , for instance at 7.7 m , 8.6 m , and 11.3 m . 
for a few objects , it was possible to locate the spatial origin of these features using adaptive optics imaging or midi long - baseline interferometry .these observations showed that these molecular bands originate from a significantly more extended region than the nir continuum emission , including the gap region and the outer disk .this is consistent with the scenario that these particles are transiently heated by uv photons and can be observed over a wide range of stellocentric radii .one of the most intriguing findings obtained with ir interferometry is the detection of non - zero phase signals , which indicate the presence of significant asymmetries in the inner , au - scale disk regions .keck / nirc2 aperture masking observations of v1247ori revealed asymmetries whose direction is not aligned with the disk minor axis and also changes with wavelength .therefore , these asymmetries are neither consistent with a companion detection , nor with disk features . instead , these observations suggest the presence of complex , radially extended disk structures , located within the gap region .it is possible that these structures are related to the spiral - like inhomogeneities that have been detected with coronagraphic imaging on about 10-times larger scales ( e.g. , * ? ? ?* ; * ? ? ?* ) and that they reflect the dynamical interaction of the gap - opening body with the disk material . studying these complex density structures and relating the asymmetries to the known spectro - photometric variability of these objects ( 2.4 ) will be a major objective of future interferometric imaging studies .the major limitations from the existing studies arise from sparse -coverage , which has so far prevented the reconstruction of direct interferometric images for these objects .different strategies have been employed in order to relax the -coverage restrictions , including the combination of long - baseline interferometric data with single - aperture interferometry techniques ( e.g. , speckle and aperture masking interferometry ; ) and the combination of data from different facilities .truly transformational results can be expected from the upcoming generation of imaging - optimized long - baseline interferometric instruments , such as the 4-telescope mir beam combiner matisse , which will enable efficient long - baseline interferometric imaging on scales of several au . _companion detections _nir imaging observations can directly reveal companions within the cleared regions of disks . both theory and observationshave long shown that stellar binary companions can open gaps ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , based on numerous moderate - contrast companions ( magnitudes of contrast , or companion masses ) that have been identified with rv monitoring , hst imaging , adaptive optics imaging , and speckle interferometry . for example , nir imaging of coku tau , a star surrounded by a transitional disk , revealed a previously unknown stellar - mass companion that is likely responsible for the inner clearing in this disk , demonstrating that it is very important to survey stars with ( pre-)transitional disks for binarity in addition to exploring other possible clearing mechanisms ( see 3.2 ; * ? ? ? 
* ) .the detection of substellar or planetary companions has been more challenging , due to the need for high contrast ( to achieve for a 1 primary star ) near or inside the formal diffraction limit of large telescopes .most of the high - contrast candidate companions identified to date have been observed with interferometric techniques such as nonredundant mask interferometry ( nrm ; * ? ? ? * ; * ? ? ? * ) , which measure more stable observable quantities ( such as closure phase ) to achieve limits of at ( 35 at 8 au ) .the discoveries of nrm include a candidate planetary - mass companion to lkca 15 and a candidate low - mass stellar companion to t cha .a possible candidate companion was reported around fl cha , although these asymmetries could be associated with disk emission instead .advanced imaging techniques also are beginning to reveal candidate companions at intermediate orbital radii that correspond to the optically thick outer regions of ( pre-)transitional disks ( e.g. , * ? ? ?* ) , beyond the outer edge of the hole or gap region .the flux contributions of companions can be difficult to distinguish from scattered light due to disk features .however , the case of lkca 15 shows that the planetary hypothesis can be tested using multi - epoch , multi - wavelength data ( to confirm colors and keplerian orbital motion ) and by direct comparison to resolved submm maps ( to localize the candidate companion with respect to the inner disk edge ) .even with the enhanced resolution and contrast of techniques like nrm , current surveys are only able to probe super - jupiter masses in outer solar systems . for bright stars ( ) , upcoming extreme adaptive optics systems like gpi and sphere will pave the way to higher contrasts with both imaging and nrm , achieving contrasts of at ( at 10 au ) . however , most young solar - type and low - mass stars fall below the optical flux limits of extreme ao .further advances for those targets will require observations with jwst that probe the sub - jupiter regime for outer solar systems ( at , or at au ) or with future ground - based telescopes that probe the jupiter regime near the snow line ( achieving at or at au ) . + * 2.4 time domain studies * ir variability in tts is ubiquitous and several ground - based studies have been undertaken to ascertain the nature of this variability ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . with the simultaneous mir wavelength coverage provided by_ spitzer _irs , striking variability in ( pre)-transitional disks was discovered , suggestive of structural changes in these disks with time .we review this variability along with the mechanisms that have been proposed to be responsible for it in the following subsections ._ `` seesaw '' variability _ the flux in many pre - transitional disks observed for variability to date `` seesaws , '' i.e. , as the emission decreases at shorter wavelengths in the irs data , the emission increases at longer wavelengths ( fig .[ figvar1 ] ; * ? ? ?* ; * ? ? ?* ; * ? ? ?mir variability with irs was also seen in some transitional disks ( e.g. , gm aur and lrll 67 ; * ? ? ?* ; * ? ? ?* ) , though in these objects the variability was predominantly around the region of the silicate emission feature .typically , the flux in the pre - transitional disks and transitional disks observed changed by about 10 between epochs , but in some objects the change was as high as 50 .this variability may point to structural changes in disks . 
sed modeling can explain the seesaw behavior seen in pre - transitional disks by changing the height of the inner wall of these disks ( fig .[ figvar2 ] ; * ? ? ?* ) . when the inner wall is taller , the emission at the shorter wavelengths is higher since the inner wall dominates the emission at 2 - 8 .the taller inner wall casts a larger shadow on the outer disk wall , leading to less emission at wavelengths beyond 20 where the outer wall dominates . when the inner wall is shorter , the emission at the shorter wavelengths is lower and the shorter inner wall casts a smaller shadow on the outer disk wall , leading to more emission at longer wavelengths .this `` seesaw '' variability confirms the presence of optically thick material in the inner disk of pre - transitional disks .the variability seen in transitional disks may suggest that while the disk is vertically optically thin , there is a radially optically thick structure in the inner disk , perhaps composed of large grains and limited in spatial extent so that it does not contribute substantially to the emission between 15 while still leading to shadowing of the outer disk .one intriguing possibility involves accretion streams connecting multiple planets , as predicted in the models of and claimed to be seen by alma .

_ possible underlying mechanisms _

[ figure caption ( fig . [ figvar1 ] , bottom panel ) : percentage change in flux between the two irs spectra above . the observed variability can not be explained by the observational uncertainties of irs ( error bars ) . figure adapted from . ]

when comparing the nature of the observed variability to currently known physical mechanisms , it seems unlikely that star spots , winds , and stellar magnetic fields are the underlying cause .the star spots proposed to explain variability at shorter wavelengths in other works ( e.g. , * ? ? ?* ) could change the irradiation heating , but this would cause an overall increase or decrease of the flux , not seesaw variability . a disk wind which carries dust may shadow the outer disk .however , do not find evidence for strong winds in their sample which displays seesaw variability .stellar magnetic fields that interact with dust beyond the dust sublimation radius may lead to changes if the field expands and contracts or is tilted with respect to the plane of the disk .however , it is thought that the stellar magnetic field truncates the disk within the corotation radius and , for many objects , the corotation radius is within the dust sublimation radius , making it unlikely that the stellar magnetic field is interacting with the dust in the disk .it is unclear what role accretion or x - ray flares may play in disk variability .accretion rates are known to be variable in young objects , but do not find that the observed variations in accretion rate are large enough to reproduce the magnitude of the mir variability observed .strong x - ray flares can increase the ionization of dust and lead to a change in scale height .however , while tts are known to have strong x - ray flares , it is unlikely that all of the mir disk variability observed overlapped with strong x - ray flares .
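as a rough numerical check on the corotation argument above , the sketch below evaluates the corotation radius from kepler's third law and a simple radiative-equilibrium estimate of the dust sublimation radius . the stellar mass , rotation period , luminosity , and sublimation temperature are assumed "typical t tauri" values , not measurements of any particular object ; for these choices the corotation radius indeed sits at or inside the ( backwarming-corrected ) sublimation radius , consistent with the statement in the text .

```python
import numpy as np

G        = 6.674e-8        # gravitational constant [cgs]
M_sun    = 1.989e33        # g
L_sun    = 3.828e33        # erg/s
sigma_sb = 5.670e-5        # Stefan-Boltzmann constant [cgs]
au       = 1.496e13        # cm

# Assumed "typical" T Tauri parameters -- illustrative only.
M_star = 0.5 * M_sun
P_rot  = 7.0 * 86400.0     # rotation period [s]
L_star = 1.0 * L_sun
T_sub  = 1500.0            # dust sublimation temperature [K]

# Corotation radius: where the Keplerian period equals the stellar rotation period.
r_co = (G * M_star * P_rot**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

# Crude sublimation radius from radiative equilibrium of a grey grain; backwarming
# by the optically thick rim can push this outward by a factor of ~2.
r_sub = (L_star / (16.0 * np.pi * sigma_sb * T_sub**4)) ** 0.5

print(f"corotation radius  ~ {r_co / au:.3f} au")
print(f"sublimation radius ~ {r_sub / au:.3f} au (up to ~2x larger with backwarming)")
```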
[ figure caption ( fig . [ figvar2 ] ) : red corresponds to visible areas of the disk wall , while light blue and dark blue areas are in the penumbra and umbra , respectively . the mir variability observed in pre - transitional disks can be explained by changes in the height of the inner disk wall , which results in variable shadowing of the outer wall . ]

the mir variability seen is most likely due to perturbations in the disk , possibly by planets or turbulence .planets are thought to create spiral density waves in the disk ( see ppv review by * ? ? ?* ) , which may have already been detected in disks ( e.g. , * ? ? ?* ; * ? ? ?* ) . such spiral density waves may affect the innermost disk , causing the height of the inner disk wall to change with time and creating the seesaw variability observed .the timescales of the variability discussed here span years down to 1 week or less .if the variability is related to an orbital timescale , this corresponds to au and .07 au in the disk , plausible locations for planetary companions given our own solar system and detections of hot jupiters .turbulence is also a viable solution .magnetic fields in a turbulent disk may lift dust and gas off the disk .the predicted magnitude of such changes in the disk scale height is consistent with the observations .observations with _ jwst _ can explore the range of timescales and variability present in ( pre-)transitional disks to test all of the scenarios explored above .it is also likely that a diverse range of variability is present in most disks around young stars ( e.g. , ysovar ; * ? ? ?* ) and _ jwst _ observations of a wide range of disks will help us more fully categorize disk variability .

* 2.5 gaseous emission *

[ figure caption ( fig . [ figco ] ) : co emission from v836 tau ( top ) and lkca 15 ( bottom ) after correction for stellar co absorption . the double - peaked profile of v836 tau indicates that the gas emission is truncated beyond . the centrally - peaked lkca 15 profile indicates gas emission extending to much larger radii . ]

the structure of the gas in ( pre-)transitional disks is a valuable probe , because the mechanisms proposed to account for the properties of these objects ( [ sec : theory ] ; e.g. , grain growth , photoevaporation , companions ) may impact the gas in different ways from the dust . the chapters in this volume by _ pontoppidan et al ._ and _ alexander et al ._ , as well as the earlier ppv review by , describe some of the available atomic and molecular diagnostics and how they are used to study gas disks . in selecting among these specifically for the purpose of probing radial disk structure , it is important to consider how much gas is expected to remain in a hole or gap .although theoretical studies suggest the gas column density is significantly reduced in the cleared dust region ( e.g. , by ; * ? ? ?* ) , given a typical tts disk column density of 100 g/ at 1 au , a fairly hefty gas column density could remain .many stars hosting ( pre-)transitional disks also show signs of significant gas accretion ( 4.2 ) .these considerations suggest that molecular diagnostics , which probe larger disk column densities ( 1 ) , are more likely to be successful in detecting a hole or gap in the gas disk . in the following , we review what is known to date about the radial structure of gas in disks .

_ mir and ( sub)mm spectral lines _

as there are now gas diagnostics that probe disks over a wide range of radii ( ) , the presence or absence of these diagnostics in emission can give a rough idea of whether gas disks are radially continuous or not .
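to illustrate the last point quantitatively , the short sketch below estimates the molecular column that would survive inside a cleared cavity . the undepleted surface density follows the ~100 g cm^-2 figure quoted above , while the depletion factor is an assumed round number for illustration .

```python
# Rough estimate of the gas column remaining inside a cleared dust cavity.
# The depletion factor is an assumed round number; the full-disk surface density
# follows the ~100 g/cm^2 value quoted in the text for ~1 au.
m_H2 = 3.35e-24            # mass of an H2 molecule [g]

sigma_full = 100.0         # undepleted gas surface density [g/cm^2]
depletion  = 1.0e4         # assumed depletion factor inside the cavity

sigma_cav = sigma_full / depletion
N_H2 = sigma_cav / m_H2    # H2 column density [cm^-2] (neglecting the He contribution)

print(f"residual surface density ~ {sigma_cav:.2g} g/cm^2")
print(f"residual H2 column       ~ {N_H2:.2g} cm^-2")
# ~3e21 cm^-2: tiny compared to a full disk, yet far above the columns at which
# common molecular lines become detectable -- hence the preference for molecular
# diagnostics when searching for gas inside holes and gaps.
```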
at large radii , the outer disks of many ( pre-)transitional disks ( tw hya , gm aur , dm tau , lkca 15 , etc . )are well studied at mm wavelengths ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* see also 2.2 ) .for some , the mm observations indicate that rotationally excited co exists within the cavity in the dust distribution ( e.g. , tw hya , lkca 15 ; * ? ? ? * ; * ? ? ? * ) , although these observations can not currently constrain how that gas is distributed .many ( pre-)transitional disks also show multiple signatures of gas close to the star : ongoing accretion ( 4.2 ) , uv h emission ( e.g. , * ? ? ?* ; * ? ? ?* ) , and rovibrational co emission .in contrast , the 10 - 20 m _ spitzer _ spectra of these objects conspicuously lack the rich molecular emission ( e.g. , h , c , hcn ) that characterizes the mid - infrared spectra of full disks around classical tts ( ctts , i.e. , stars that are accreting ; * ? ? ?* ; * ? ? ?as these mir molecular diagnostics probe radii within a few au of the star , their absence is suggestive of a missing molecular disk at these radii , i.e. , a gap between an inner disk ( traced by co and uv h ) and the outer disk ( probed in the mm ) .alternatively , the disk might be too cool to emit at these radii , or the gas may be abundant but in atomic form. further work , theoretical and observational , is needed to evaluate these possibilities ._ velocity resolved spectroscopy _several approaches can be used to probe the distribution of the gas in greater detail . in the absence of spatially resolved imaging , which is the most robust approach , velocity resolved spectroscopy coupled with the assumption of keplerian rotationcan probe the radial structure of gaseous disks ( see ppiv review by * ? ? ?the addition of spectroastrometric information ( i.e. , the spatial centroid of spectrally resolved line emission as a function of velocity ) can reduce ambiguities in the disk properties inferred with this approach . these techniques have been used to search for sharp changes in gaseous emission as a function of radius as evidence of disk cavities , and to identify departures from azimuthal symmetry such as those created by orbiting companions . velocity resolved spectroscopy of co rovibrational emission provides tentative evidence for a truncated inner gas disk in the evolved disk v836 tau . an optically thin gap , if there is one , would be found at small radii ( ) and plausibly overlap the range of disk radii probed by the co emission .indeed , the co emission from v836 tau is unusual in showing a distinct double - peaked profile consistent with the truncation of the co emission beyond ( fig .[ figco ] ; * ? ? ?* ) . in comparison ,other disks show much more centrally peaked rovibrational co line profiles .spectroscopy also indicates possible differences in the radial extent of the gas and dust in the inner disk . the co emission profile from the pre - transitional disk of lkca 15 ( fig . [ figco ] ; * ?* ) spans a broad range of velocities , indicating that the inner gas disk extends over a much larger range of radii ( from out to several au ) than in v836 tau .this result might be surprising given that sed modeling suggests that the inner optically thick dust disk of lkca 15 extends over a narrow annular region ( 0.150.19au ; * ? ? 
?the origin of possible differences in the radial extent of the gas and dust in the inner disk region is an interesting topic for future work .some of the best evidence to date for the truncation of the outer gas disks of ( pre-)transitional disks comes from studies of herbig ae stars . for nearby systems with large dust cavities , ground - based observationscan spatially resolve the inner edge of the co rovibrational emission from the outer disks around these bright stars ( e.g. , hd141569 ; * ? ? ?line profile shapes and constraints from uv fluorescence modeling have also been used to show that rovibrational co and oh emission is truncated at the same radius as the disk continuum in some systems , although the radial distribution of the gas and dust appear to differ in other systems ( e.g. , irs 48 ; * ? ? ?the gas disk may sometimes be truncated significantly inward of the outer dust disk and far from the star .for example , the co emission from sr 21 extends much further in ( to au ) than the inner hole size of au .several unusual aspects of the gaseous emission from hd100546 point to the possibility that the cavity in the molecular emission is created by an orbiting high mass giant planet . inferred the presence of a gap in the gas disk based on a local minimum in the [ ] 6300 emission from the disk . the rovibrational oh emission from hd100546is found to show a strong line asymmetry that is consistent with emission from an eccentric inner rim ( ; * ? ? ?eccentricities of that magnitude are predicted to result from planet - disk interactions ( e.g. , * ? ? ?* ; * ? ? ?in addition , the co rovibrational emission varies in its spectroastrometric signal .the variations can be explained by a co emission component that orbits the star just within the 13au inner rim ( fig .[ figoh ] ; * ? ? ?the required emitting area ( 0.1au ) is similar to that expected for a circumplanetary disk surrounding a planet at that distance ( e.g. , * ? ? ?* ; * ? ? ?further studies with alma and elts will give us new opportunities to explore the nature of disk holes and gaps in the gas disk ( e.g. , * ? ? ?* ; * ? ? ?there are three main observational constraints every theory must confront when attempting to explain ( pre-)transitional disk observations .first , the cleared regions in ( pre-)transitional disks studied to date are generally large ( 2.1 , 2.2 ) . for transitional disks, the optically thin region extends from tens of au all the way down to the central star . for pre - transitional disks ,sed modeling suggests that the optically thick region may only extend up to 1 au , followed by a gap that is significantly depleted of dust up to tens of au , as seen in the transitional disks . at the same time, submm imaging has revealed the existence of some disks which have inner regions significantly depleted of large dust grains without exhibiting mir sed deficits ( 2.2 ) , suggesting that a large amount of small dust grains still remain in the inner disk , as also shown by nir polarimetric images ( 2.3 ) .second , in order to be discernible in the sed , the gaps need to be optically thin which implies that the mass of small dust grains ( of order a micron or less ) must be extremely low .third , while most ( pre-)transitional disks accrete onto the central star at a rate which is lower than the median accretion rate for tts ( ; * ? ? 
?* ) , their rates are still substantial ( 4.2 ) .this indicates that considerable gas is located within the inner disk , although we currently do not have many constraints on how this gas is spatially distributed . in the following sections , we review the clearing mechanisms that have been applied to explain ( pre-)transitional disk observations in light of the above constraints . * 3.1 non - dynamical clearing mechanisms * a diverse set of physical mechanisms has been invoked to explain ( pre-)transitional disk observations , with varying levels of success .much of the attention is focused on dynamical interactions with one or more companion objects , although that subject will be addressed separately in 3.2 .there are a number of alternative scenarios that merit review here , including the effects of viscous evolution , particle growth and migration , and dispersal by ( photoevaporative ) winds . _viscous evolution _the interactions of gravitational and viscous torques comprise dominate the structural evolution of disks over most of their lifetimes .an anomalous kinematic viscosity , presumably generated by mhd turbulence , drives a persistent inward flow of gas toward the central star ( e.g. , * ? ? ? * ) .angular momentum is transported outward in that process , resulting in some disk material being simultaneously spread out to large radii .as time progresses , the disk mass and accretion rate steadily decline .to first order , this evolution is self - similar , so there is no preferential scale for the depletion of disk material . the nominal viscous evolution ( ) timescales at 10s of au are long , comparable to the stellar host ages . if the gas and dust were perfectly coupled , we would expect viscous evolution acting alone to first produce a slight enhancement in the fir mm sed ( due to the spreading of material out to larger radii ) and then settle into a slow , steady dimming across the sed ( as densities decay ) . coupled with the sedimentation of disk solids , these effects are a reasonable explanation for the evolved disks ( 2.1 ) . that said , there is no reason to expect that viscous effects alone are capable of the preferential depletion of dust at small radii needed to produce the large cleared regions typical of ( pre-)transitional disks . even in the case of enhanced viscosity in the outer wall due to the magneto - rotational instability ( mri * ? ? ?* ) , this needs a pre - existing inner hole to have been formed via another mechanism in order to be effective . _ grain growth _the natural evolution of dust in a gas - rich environment offers two complementary avenues for producing the observable signatures of holes and gaps in disks .first is the actual removal of material due to the inward migration and subsequent accretion of dust particles .this radial drift " occurs because thermal pressure causes the gas to orbit at subkeplerian rates , creating a drag force on the particles that saps their orbital energy and sends them spiraling in to the central star .swept up in this particle flow , the reservoir of emitting grains in the inner disk can be sufficiently depleted to produce a telltale dip in the ir sed ( e.g. , * ? ? ?a second process is related to the actual growth of dust grains . instead of a decrease in the dust densities , the inner disk only appears to be cleared due to a decrease in the grain emissivities : larger particles emit less efficiently ( e.g. , * ? ? ? * ; * ? ? 
?because growth timescales are short in the inner disk , the ir emission that traces those regions could be suppressed enough to again produce a dip in the sed .the initial models of grain growth predicted a substantial reduction of small grains in the inner disk on short timescales , and therefore a disk clearing signature in the ir sed .more detailed models bear out those early predictions , even when processes that decrease the efficiency ( e.g. , fragmentation ) are taken into account . demonstrated that such models can account for the ir sed deficit of transitional disks by tuning the local conditions so that small ( - sized ) particles in the inner disk grow efficiently to mm cm sizes .however , those particles can not grow much larger before their collisions become destructive : the resulting population of fragments would then produce sufficient emission to wash out the infrared sed dip .the fundamental problem is that those large particles emit efficiently at mm cm wavelengths , so these models do not simultaneously account for the ring - like emission morphologies observed with interferometers ( fig .[ figmm ] ) . argued that the dilemma of this conflicting relationship between growth and the ir mm emission diagnostics means that particle evolution _alone _ is not the underlying cause of cavities in disks . a more in - depth discussion of these and other issues related to the observational signatures of grain growth and migration are addressed in the chapter by _ testi et al ._ in this volume. _ photoevaporation _another mechanism for sculpting ( pre-)transitional disk structures relies on the complementary interactions of viscous evolution , dust migration , and disk dispersal via photoevaporative winds . here, the basic idea is that the high - energy irradiation of the disk surface by the central star can drive mass - loss in a wind that will eventually limit the re - supply of inner disk material from accretion flows .once that occurs , the inner disk can rapidly accrete onto the star , leaving behind a large ( and potentially growing ) hole at the disk center .the detailed physics of this process can be quite complicated , and depend intimately on how the disk is irradiated . the chapter in this volume by _ alexander et al ._ provides a more nuanced perspective on this process , as well as on the key observational features that support its presumably important role in disk evolution in general , and the ( pre-)transitional disk phenomenon in particular .however , for the subsample of ( pre-)transitional disks that have been studied in detail , the combination of large sizes and tenuous ( but non - negligible ) contents of the inner disks ( 2.1 , 2.2 , 2.3 ) , substantial accretion rates ( 4.2 ) , and relatively low x - ray luminosities ( 4.3 ) indicate that photoevaporation does not seem to be a viable mechanism for the depletion of their inner disks ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?in addition , there exist disks with very low accretion rates onto the star that do not show evidence of holes or gaps in the inner disk . * 3.2 dynamical clearing by companions * much of the theoretical work conducted to explain the clearings seen in disks has focused on dynamical interactions with companions .when a second gravitational source is present in the disk , it can open a gap ( * ? ? ?* ; * ? ? ?* ; * ? ? 
?* see chapter by _ baruteau et al ._ in this volume ) .the salient issue is whether this companion is a star or a planet .it has been shown both theoretically and observationally that a stellar - mass companion can open a gap in a disk .for example , coku tau is surrounded by a transitional disk and it has a nearly equal mass companion with a separation of 8 au ( 2.3 ; * ? ? ? * ) .such a binary system is expected to open a cavity at 1624 au , which is consistent with the observations . herewe focus our efforts on discussing dynamical clearing by planets. a confirmed detection of a planet in a disk around a young star does not yet exist .therefore , it is less clear if dynamical clearing by planets is at work in some or most ( pre-)transitional disks . given the challenges in detecting young planets in disks , the best we can do at present is theoretically explore what observational signatures would be present if there were indeed planets in disks and to test if this is consistent with what has been observed to date , as we will do in the following subsections . _ maintaining gas accretion across holes and gaps _ a serious challenge for almost all theoretical disk clearing models posed to date is the fact that some ( pre-)transitional disks exhibit large dust clearings while still maintaining significant gas accretion rates onto the star . compared to other disk clearing mechanisms ( see 3.1 ) , gap opening by planets can more easily maintain gas accretion across the gap since the gravitational force of a planet can `` pull '' the gas from the outer disk into the inner disk .the gap s depth and the gas accretion rate across the gap are closely related .the disk accretion rate at any radius is defined as , where and are the gas surface density and radial velocity at .if we further assume = and the accretion rate across the gap is a constant , the flow velocity is accelerated by about a factor of 100 in a gap which is a factor of 100 in depth . in a slightly more realistic picture , = breaks down within the gap since the planet - disk interaction is highly asymmetric in the r- 2-d plane . inside of the gap, the flow only moves radially at the turnover of the horseshoe orbit , which is very close to the planet .this high velocity flow can interact with the circumplanetary material , shock with and accrete onto the circumplanetary disk , and eventually onto the protoplanet .due to the great complexity of this process , the ratio between the accretion onto the planet and the disk s accretion rate across the gap is unclear .thus , we will parameterize this accretion efficiency onto the planet as . after passing the planet ,the accretion rate onto the star is only of the accretion rate of the case where no planet is present .note that this parameterization assumes that the planet mass is larger than the local disk mass so that the planet migration is slower than the typical type ii rate . for a disk with and ,the local disk mass is 1.5 at 20 au . carried out 3-d viscous hydrodynamic simulations with a sink particle as the planet and found a of 0.9 . carried out 2-d viscous hydro - simulations , but depleted the circumplanetary material at different timescales , and found that can range between 0.1 - 0.9 depending on the circumplanetary disk accretion timescale .the accretion efficiency onto the planet plays an essential role in the accretion rate onto the star . 
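a minimal sketch of the mass-conservation argument above , assuming steady state : with the accretion rate held fixed , a gap that is depleted by a factor of 100 in surface density must be crossed at roughly 100 times the unperturbed radial velocity . the gap depth , surface density , and efficiency value below are illustrative assumptions , and treating ( 1 - zeta ) as the fraction of the flow that continues past the planet is one simple way to read the parameterization described in the text .

```python
import numpy as np

# Steady-state accretion through a planet-opened gap: mdot = 2*pi*R*Sigma*v_r.
# All numbers are illustrative assumptions, not values taken from the cited sources.
M_sun, yr, au = 1.989e33, 3.156e7, 1.496e13

mdot = 1e-8 * M_sun / yr        # accretion rate far from the gap [g/s]
R    = 20.0 * au                # radius inside the gap [cm]

sigma_unperturbed = 10.0        # gas surface density without a planet [g/cm^2]
gap_depth         = 100.0       # factor by which Sigma is reduced in the gap

for sigma in (sigma_unperturbed, sigma_unperturbed / gap_depth):
    v_r = mdot / (2.0 * np.pi * R * sigma)     # radial velocity needed [cm/s]
    print(f"Sigma = {sigma:6.2f} g/cm^2  ->  v_r ~ {v_r:.2e} cm/s")
# The same mdot crossing a 100x-depleted gap requires ~100x the radial velocity.

# If a fraction zeta of the flow is captured by the planet, only (1 - zeta)
# of the gap-crossing rate is delivered to the inner disk and the star.
zeta = 0.9                       # assumed capture efficiency
print(f"mdot onto star ~ {(1 - zeta) * mdot * yr / M_sun:.1e} Msun/yr")
```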
_ explaining ir sed deficits _ regardless of the accretion efficiency , there is an intrinsic tension between significant gas accretion rates onto the star and the optically thin inner disk region in ( pre-)transitional disks .this is because the planet s influence on the accretion flow of the disk is limited to the gap region whose outer and inner edge hardly differ by a factor of more than 2 . after passing through the gap ,the inner disk surface density is again controlled only by the accretion rate and viscosity ( e.g. , mhd turbulence ) , similar to a full disk .since a full disk s inner disk produces strong nir emission , it follows that transitional disks should also produce strong nir emission , but they do not . with this simple pictureit is very difficult to explain transitional disks , which have strong nir deficits , compared to pre - transitional disks . for example , the transitional disk gm aur has very weak nir emission .it has an optically thin inner disk at 10 m with an optical depth of .01 and an accretion rate of . using a viscous disk model with , is derived to be 10100 g/ at 0.1 au .considering that the nominal opacity of ism dust at 10 m is 10 , the optical depth at 10 m for the inner disk is 1001000 , which is 45 orders of magnitude larger than the optical depth ( ) derived from observations . in order to resolve the conflict between maintaining gas accretion across large holes and gaps while explaining weak nir emission , several approaches are possible . in the following sections, we will outline two of these , namely multiple giant planets and dust filtration along with their observational signatures . _ a possible solution : multiple giant planets _ = 0.01 ,top ) and inviscid ( bottom ) simulations . in the viscously accreting disk ,the accretion flow carries small particles from the outer disk to the inner disk while big particles are deposited at the gap edge . in the inviscid disk ,particles are trapped in the vortex at the gap edge and the planet co - orbital region .figure adapted from and _ zhu et al ._ , in press .[ dynamic],width=8 ] one possibility is that multiple planets are present in ( pre-)transitional disks .if multiple planets from 0.1 au to tens of au can open a mutual gap , the gas flow can be continuously accelerated and passes from one planet to another so that a low disk surface density can sustain a substantial disk accretion rate onto the star .however , hydrodynamical simulations have shown that each planet pair in a multiple planet system will move into 2:1 mean motion resonance . even in a case with four giant planets , with the outermost one located at 20 au ,the mutual gap is from 2 - 20 au .therefore , to affect the gas flow at 0.1 au , we need to invoke an even higher number of giant planets . if there are multiple planets present in ( pre-)transitional disks , the planet accretion efficiency parameter ( ) can not be large in order to maintain a moderate accretion rate onto the star . with planets in the disk ,the accretion rate onto the star will be . if and the full disk has a nominal accretion rate , after passing two planets the accretion rate is 0.01 10=10 which is already below the observed accretion rates in ( pre-)transitional disks ( see 4.2 ) .on the other hand , if , even with four planets , the disk can still accrete at . thus , is one key parameter in the multi - planet scenario , which demands further study . 
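the two estimates above can be spelled out with a few lines of arithmetic . the nominal full-disk accretion rate , the ( 1 - zeta )-per-planet bookkeeping , and the opacity convention ( quoted per gram of gas with ism dust already folded in , as the discussion above appears to do ) are assumptions made to reproduce the round numbers in the text .

```python
# (1) Accretion rate delivered to the star after passing n planets, assuming each
#     planet captures a fraction zeta of the flow (illustrative bookkeeping).
mdot_full = 1.0e-8                                  # nominal full-disk rate [Msun/yr] (assumed)
for zeta, n in [(0.9, 2), (0.1, 4)]:
    mdot_star = (1.0 - zeta) ** n * mdot_full
    print(f"zeta = {zeta}, {n} planets -> mdot_star ~ {mdot_star:.1e} Msun/yr")

# (2) 10-micron optical depth of an inner disk that still carries a GM Aur-like
#     accretion rate, using the surface densities quoted in the text.
kappa_10um = 10.0                                   # nominal ISM dust opacity [cm^2 per g of gas]
for sigma in (10.0, 100.0):
    print(f"Sigma = {sigma:5.1f} g/cm^2 -> tau(10um) ~ {kappa_10um * sigma:.0f}")
# tau ~ 100-1000, i.e. 4-5 orders of magnitude larger than the tau ~ 0.01 inferred
# from the SED -- the tension that motivates additional small-grain depletion.
```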
_another possible solution : dust filtration _another possibility is that the small dust grains in the inner disk are highly depleted by physical removal or grain growth , to the point where the dust opacity in the nir is far smaller than the ism opacity .generally , this dust opacity depletion factor is to 10 .dust filtration is a promising mechanism to deplete dust to such a degree .dust filtration relies on the fact that dust and gas in disks are not perfectly coupled . due to gas drag, dust particles will drift towards a pressure maximum in a disk ( see chapter by _ johansen et al ._ in this volume ; * ? ?the drift speed depends on the particle size , and particles with the dimensionless stopping time drift fastest in disks .if the particle is in the epstein regime ( when a particle is smaller than molecule mean - free - path ) and has a density of 1 , is related to the particle size , , and the disk surface density , , as . in the outer part of the disk ( e.g. , au ) where 1 or 10 g , 1 cm particles have 1 or 0.1 .thus , submm and cm observations are ideal to reveal the effects of particle drift in disks . at the outer edge of a planet - induced gap , where the pressure reaches a maximum , dust particles drift outwards , possibly overcoming their coupling to the inward accreting gas .dust particles will then remain at the gap s outer edge while the gas flows through the gap .this process is called `` dust filtration '' , and it depletes dust interior to the semi - major axis of the planet - induced gap , forming a dust - depleted inner disk .dust trapping at the gap edge was first simulated by .however , without considering particle diffusion due to disk turbulence , mm and cm sized particles will drift from tens of au into the star within 200 orbits . included particle diffusion due to disk turbulence in 2-d simulations evolving over viscous timescales , where a quasi - steady state for both gas and dust has been achieved , and found that micron sized particles are difficult to filter by a jupiter mass planet in a disk . 
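a short sketch of the dimensionless stopping time in the epstein regime , following the scaling quoted above : the internal grain density and surface densities match the text's own outer-disk example , while the numerical prefactor ( here pi/2 , the common vertically integrated convention ) should be treated as an assumption .

```python
import numpy as np

def stokes_epstein(s_cm, sigma_gas, rho_grain=1.0):
    """Dimensionless stopping time (Stokes number) of a grain of radius s_cm [cm]
    in the Epstein regime, for a vertically integrated gas column sigma_gas [g/cm^2].
    The pi/2 prefactor is the usual vertically integrated convention (assumption)."""
    return np.pi * rho_grain * s_cm / (2.0 * sigma_gas)

# Reproducing the outer-disk example in the text: Sigma = 1 or 10 g/cm^2.
for sigma in (1.0, 10.0):
    for s in (1e-4, 0.1, 1.0):          # 1 micron, 1 mm, 1 cm grains
        print(f"Sigma = {sigma:4.1f} g/cm^2, s = {s:6.4f} cm -> St ~ "
              f"{stokes_epstein(s, sigma):.2e}")
# Grains with St ~ 1 (roughly cm sizes here) drift fastest toward pressure maxima,
# while micron-sized grains (St << 1) stay well coupled to the gas -- the basis of
# the size-dependent filtering at a gap edge.
```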
has done 1-d calculations considering dust growth and dust fragmentation at the gap edge and suggested that micron - sized particles may also be filtered . considering that the flow pattern is highly asymmetric within the gap , 2-d simulations including both dust growth and dust fragmentation may be needed to better understand the dust filtration process .

[ table 2 ( tabchar ) : generic properties expected for disks cleared by different mechanisms ]
viscous evolution & no hole & no hole & low accretion & low mass & no dependence
grain growth & no hole & no hole & unchanged & all masses & no dependence
photoevaporation & -radius hole & no gas within & no accretion & low mass & correlated
.1 m planet & gap & no hole & unchanged & all masses & no dependence
m planet & gap & gap & ctts & higher masses & no dependence
multiple giant planets & gap-radius hole & no gas within & no accretion & higher masses & no dependence

dust filtration suggests that a deeper gap opened by a more massive planet can lead to the depletion of smaller dust particles . thus transitional disks may have a higher mass planet(s ) than pre - transitional disks . since a higher mass planet exerts a stronger torque on the outer disk , it may slow down the accretion flow passing the planet and lead to a lower disk accretion rate onto the star . this is consistent with observations that transitional disks have lower accretion rates than pre - transitional disks . furthermore , dust filtration can explain the observed differences in the relative distributions of micron sized dust grains and mm sized dust grains . disks where the small dust grains appear to closely trace the large dust distribution ( i.e. , category c disks from 2.3 ) could be in an advanced stage of dust filtration , after the dust grains in the inner disk have grown to larger sizes and can no longer be detected in nir scattered light . disks with nir scattered light imaging evidence for a significant amount of small , submicron sized dust grains within the cavities in the large , mm - sized dust grains seen in the submm ( i.e. , the category a and b disks from 2.3 ) could be in an earlier stage of disk clearing via dust filtration .

future observations will further our understanding of the many physical processes which can occur when a giant planet is in a disk , such as dust growth in the inner disk , dust growth at the gap edge , tidal stripping of the planets , and planets in the dead zone . in addition , here we have explored how to explain large holes and gaps in disks , yet there may be smaller gaps present in disks ( 2.1 ) that may not be observable with current techniques . the gap opening process is tied to the viscosity parameter assumed . if the disk is inviscid , a very low mass perturber can also open a small gap ( e.g. , 10 m ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . disk ionization structure calculations have indeed suggested the existence of such low turbulence regions in protoplanetary disks ( i.e. , the dead zone , see chapter by _ turner et al ._ in this volume ) , which may indicate that gaps in disks are common . interestingly , in a low viscosity disk , vortices can also be produced at the gap edge , which can efficiently trap dust particles ( fig . [ dynamic ] ; * ? ? ?* ; * ? ? ?* _ zhu et al . _ , in press ) and are observable with alma . we note that other physical processes can also induce vortices ( see chapter by _ turner et al ._ in this volume ; * ? ?* ; * ? ? ?* ) , thus the presence of vortices can not be viewed as certain evidence for the presence of a planet within a disk gap .
in the near future , new observations with alma will provide further constraints on the distributions of dust and gas in the disk , which will help illuminate our understanding of the above points .basic unresolved questions underlie and motivate the study of disk demographics around tts .do all disks go through a ( pre-)transitional disk phase as stars evolve from disk - bearing stars to diskless stars ? assuming an affirmative answer , we can measure a _ transition timescale _ by multiplying the fraction of disk sources with ( pre-)transitional disk seds with the tts disk lifetime .are ( pre-)transitional disks created by a single process or multiple processes ?theory predicts that several mechanisms can generate ( pre-)transitional disk seds , all of which are important for disk evolution .table 2 summarizes the generic properties expected for ( pre-)transitional disks produced by different mechanisms , as previously described in [ sec : theory ] and the literature ( e.g. , * ? ? ?* ; * ? ? ?* ) . which of these occur commonly ( or efficiently ) and on what timescale(s ) ?these questions can , in principle , be addressed demographically because different mechanisms are predicted to occur more readily in different kinds of systems ( e.g. , those of different disk masses and ages ) and are expected to impact system parameters beyond the sed ( e.g. , the stellar accretion rate ) .we highlight developments along these lines below , taking into account points raised in 2 and 3 .chamaeleontis association ; , upper scorpius association .[ figfraction],width=309 ] * 4.1 frequency and the transition timescale * because different clearing mechanisms likely operate on different timescales , demographic studies aim to identify the processes that produce ( pre-)transitional disks in populations of different ages .one area that has been investigated thoroughly is the overall frequency of transitional disks . _spitzer _ surveys have had tremendous success in cataloging young stellar object ( yso ) populations , identifying dust emission from disks around stars within 500 pc across the full stellar mass range and into the brown dwarf regime at 3.6 to 8 m , and for most of the stellar mass range at 24 m .based on the photometric seds , a number of studies have identified transitional disk candidates , allowing demographic comparisons with stellar and disk properties .we note that reported transitional disk frequencies to date should be taken as a lower limit on the frequency of holes and gaps in disks given that , as noted in 2.1 , it is harder to identify pre - transitional disks based on photometry alone as well as smaller gaps in disks with current data .there has been some controversy regarding the transition timescale , which as noted above can be estimated from the frequency of transitional disks relative to the typical disk lifetime .early pre-_spitzer _ studies ( e.g. , * ? ? ?* ; * ? ? ?* ) suggested that the transition timescale was short , of order 10% of the total disk lifetime , a few years .the values estimated from _surveys of individual clusters forming regions have a wider range , from as small as these initial estimates to as long as a time comparable to the disk lifetime itself . 
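the demographic clock described above amounts to a one-line estimate . the sketch below spells it out with assumed round numbers for the transitional-disk fraction and the overall disk lifetime ; the actual values debated in the literature span the range discussed in the text .

```python
# Transition timescale from disk demographics: t_trans ~ f_trans * t_disk.
# Both inputs are assumed round numbers for illustration only.
t_disk_myr = 3.0                        # typical primordial disk lifetime [Myr]
for f_trans in (0.05, 0.1, 0.3):        # fraction of disks with transitional SEDs
    t_trans = f_trans * t_disk_myr
    print(f"f_trans = {f_trans:.2f} -> t_trans ~ {t_trans:.2f} Myr")
# A few-percent fraction implies a rapid (~0.1-0.3 Myr) clearing phase, while larger
# fractions push the timescale toward ~1 Myr, a substantial part of the disk lifetime.
```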
combining data from multiple regions, found evidence for an increase in the transitional disk frequency as a function of mean cluster age .the discrepancies among these studies are largely the result of differing definitions of what constitutes a disk in transition , as well as how to estimate the total disk lifetime .the shorter timescales are typically derived using more restrictive selection criteria . in particular , the evolved disks , in which the ir excess is small at all observed wavelengths , tend to be more common around older stars ( myr ) and low - mass stars and brown dwarfs ; the inclusion of this sed type will lead to a larger transitional disk frequency and hence timescale for samples that are heavily weighted to these stellar types .moreover , the status of the evolved disks as _ bona fide _ ( pre-)transitional disks ( i.e. , a disk with inner clearing ) is somewhat in dispute ( see 2.1 ) . as pointed out by , the different sed types may indicate multiple evolutionary mechanisms for disks .more complete measurements of disk masses , accretion rates , and resolved observations of these objects are needed to better define their properties and determine their evolutionary status .combining data from multiple regions and restricting the selection to transitional disk seds showing evidence for inner holes , found evidence for an increase in the transition frequency as a function of mean cluster age .however , there was considerable uncertainty because of the small number statistics involved , especially at older ages where the total disk frequency is typically 10% or less .[ figfraction ] shows the fraction of transitional disks relative to the total disk population in a given region as a function of the mean age of each region .here we supplement the statistics from the irac flux - limited photometric surveys listed in with new results from irs studies of several star forming regions , and a photometric survey of the 11 myr - old upper scorpius association .a weak age dependence remains , with frequencies of a few percent at t 2 myr and percent at older ages .some of this correlation can be explained as a consequence of the cessation of star formation in most regions after myr ; as noted , the frequency of transitional disks relative to full disks in a region no longer producing new disks should increase as the number of full disks decreases with time .note that the sample selection does not include pre - transitional disks , which can not be reliably identified without mir spectroscopy .interestingly , for the regions with reasonably complete irs observations ( the four youngest regions in fig .[ figfraction ] ) , the transition fraction would increase to % if the pre - transitional disks were included .note also that the irs surveys represent the known membership of each region , and are reasonably complete down to spectral types of about m5 .we refer the reader to the cited articles for a full assessment of sample completeness .all transitional disks in this combined sample are spectroscopically confirmed pms stars .m for disks in ophiuchus .the dotted line indicates the locus occupied by ctts in taurus .disks with submm cavities and ( pre-)transitional disk seds are overlaid with a star ( sr 21 , bottom left ; doar 44 , middle ) ; the disk with a submm cavity but no obvious sed mir deficit ( sr 24s ) is overlaid with an `` x '' .sources with a are indicated with a blue box .objects where only upper limits were available for the accretion rates are indicated with an arrow ( 
including sr 21 , gss 26 , doar 25 ) .accretion rates are from if available , otherwise from , and ( scaled ) .disk masses are from ., width=309 ] an important caveat to the above statistics is that sed morphology is an imperfect tracer of disk structure and is most sensitive to large structures that span many au ( see 2.1 ) .smaller gaps can be masked by emission from even tiny amounts of dust . as mentioned in 2.2 , resolved submillimeter imaging has found a greater frequency of disks with inner dust cavities among the 1 - 2 myr - old stars in taurus and ophiuchus ( % ; * ? ? ?* ) than has been estimated from nir flux deficits alone .the statistics for older regions are incomplete , limited by weaker ( sub)millimeter emission that may be a result of significant grain growth compared to younger disks . improved statistics await more sensitive observations such as with alma . * 4.2 disk masses and accretion rates * another powerful tool has been to compare stellar accretion rates and total disk mass .looking at full disks around ctts and ( pre-)transitional disks in taurus , found that compared to single ctts , ( pre-)transitional disks have stellar accretion rates times lower at the same disk mass and median disk masses times larger . these properties are roughly consistent with the expectations for jupiter mass giant planet formation .some of the low accretion rate , low disk mass sources could plausibly be produced by uv photoevaporation , although none of the ( pre-)transitional disks ( gm aur , dm tau , lkca 15 , ux tau a ) are in this group .notably , none of these disks were found to overlap the region of the m plane occupied by most disks around ctts . with the benefit of work in the literature , a preliminary m plot can also be made for disks around ctts and ( pre-)transitional disks in ophiuchus ( fig .[ figmdotmass ] ) .most of the oph sources are co - located with the taurus ctts , while several fall below this group .the latter includes doar 25 and the transitional disk sr 21 .the other two sources have high extinction .such sources ( ; fig .[ figmdotmass ] , blue squares ) are found to have lower m , on average , for their disk masses .this is as might be expected , if scattered light leads to an underestimate of , and therefore the line luminosity from which m is derived .a source like doar 25 , which has a very low accretion rate for its disk mass and also lacks an obvious hole based on the sed or submillimeter continuum imaging , is interesting .does it have a smaller gap than can be measured with current techniques ?higher angular resolution observations of such sources could explore this possibility . like sr 21 ,both sr 24s and doar 44 have also been identified as having cavities in their submillimeter continuum , although their accretion rates place them in the ctts region of the plot .doar 44 has been previously identified as a pre - transitional disk based on its sed .whether sr 24s has a strong mir deficit in its sed is unknown , as its sed has not been studied in as much detail . what accounts for the ctts - like accretion rates of these systems ?one possibility is that these systems may be undergoing low mass planet formation .several studies have described how low mass planets can potentially clear gaps in the dust disk while having little impact on the gas , with the effect more pronounced for the larger grains probed in the submillimeter ( see 3.2 , also * ? ? ?* ; * ? ? ?* ) . 
in related work, recent studies find that ( pre-)transitional disks have lower stellar accretion rates on average than other disks in the same star forming region and that the accretion rates of pre - transitional disks are closer to that of ctts than the transitional disks .however , other studies find little difference between the accretion rates of ( pre-)transitional and other disks ( _ keane et al ., _ submitted ; * ? ? ?. this could be due to different sample selection ( i.e. , colors vs. irs spectra ) and different methods for calculating accretion rates ( i.e. , h vs. nir emission lines ) .more work has to be done to better understand disk accretion rates , ideally with a large ir and submm selected sample and consistent accretion rate measurement methods . * 4.3 stellar host properties * m4 .[ figspt],width=309 ] one would naively expect an age progression from full disks to ( pre-)transitional disks to diskless stars in a given region if these classes represent an evolutionary sequence .observational evidence for age differences , however , is decidedly mixed . from improved parallax measurements in taurus , found that the four stars surrounded by ( pre-)transitional disks with age measurements ( dm tau , gm aur , lkca 15 , ux tau a ) were intermediate between the mean ctts and weak tts ( i.e. , stars that are not accreting ) ages .however , the quoted age uncertainties are roughly equal to the ages of individual objects , and the small sample is not statistically robust .moreover , studies of other regions have found no statistical difference in ages between stars with and without disks ( ngc 2264 , orion ; * ? ? ?* ; * ? ? ?in general , the uncertainties associated with determining stellar ages remain too large , and the samples of ( pre-)transitional disks too small , to allow any definitive conclusion of systematic age differences for ( pre-)transitional disks . the frequency of transitional disks as a function of stellar mass also provides some clues to their origins . showed that transitional disks appeared to be underrepresented among mid- to late - m stars compared with the typical initial mass function of young stellar clusters . using the expanded sample adopted for fig .[ figfraction ] , this initial finding appears to be robust ( fig .[ figspt ] ) .the deficit remains even after adding the known pre - transitional disks .however , the evolved disks ( as opposed to the _ transitional _ disks ) do appear to be much more common around lower - mass stars ( e.g. , * ? ? ? * ) .as pointed out , there may be a strong bias here as optically thick inner disks around mid- to late - m stars produce less short - wavelength excess and can be confused with true inner disk clearing .nevertheless , this discrepancy may provide another indication of different clearing mechanisms operating in different transitional disks .for example , the relative dearth of ( pre-)transitional disks around lower mass stars may reflect a decreased rate of giant planet formation , as would be expected given their typically smaller initial disk masses . among other stellar properties ,the x - ray luminosity can provide further useful constraints . has been invoked as a major contributor to theoretical photoevaporation rates ( e.g. , * ? ? ?* ; * ? ? ?* ) , as well as accretion rates related to mri at the inner disk edge . 
examining the ( pre-)transitional disks in taurus and chamaleon i , found correlations between the size of the inner cleared regions and several stellar properties including stellar mass , x - ray luminosity , and mass accretion rate .these possibly point to the mri or photoevaporation mechanisms .however , known independent correlations between stellar mass , , and mass accretion rate may also be responsible . in a follow - up study with a large sample of 62 ( pre-)transitional disks in orion, attempted to correct for the stellar mass dependences and found no residual correlation between accretion rate or and the size of the inner hole .the measured properties of most ( pre-)transitional disks do not overlap with the ranges of parameter space predicted by these models . concluded that the demographics of their sample are most consistent with giant planet formation being the dominant process responsible for creating ( pre-)transitional disk seds . to better understand the connection between underlying physical mechanisms and observed disk demographics , more theoretical and observational work needs to be done .improved theoretical estimates on stellar ages , accretion efficiencies , and disk masses for different clearing mechanisms will aid in interpretation of the observations . at the same time , larger ground - based surveys are essential to confidently constrain basic properties such as stellar accretion rates , ages , and disk masses . in the near future ,the higher resolution of alma will reveal the extent of gaps in disks , leading to better statistics to measure the ( pre-)transitional disk frequency and the transition timescale ._ jwst _ will be key in addressing how the ( pre-)transitional frequency depends on age .the best constraints on disk demographics clearly require a multi - wavelength approach .it is thought that practically all stars have planets ( see chapter in this volume by _ fischer et al .if we start with the reasonable assumption that these planets formed out of disks , then the question is not if there are planets in disks around young stars , but what are their observable signatures . to datewe have detected large , optically thin holes and gaps in the dust distribution of disks and theoretical planet clearing mechanisms can account for some of their observed properties . these ( pre-)transitional disks have captured the interest of many scientists since they may be a key piece of evidence in answering one of the fundamental questions in astronomy : how do planets form ? while other disk clearing mechanisms ( e.g , grain growth , photoevaporation ; 3.1 ) should certainly be at play in most , if not all , disks ( and may even work more effectively in concert with planets ; * ? ? ?* ) , here we speculate what kinds of planets could be clearing ( pre-)transitional disks , beginning with massive giant planets .if a massive giant planet ( or multiple planets ) is forming in a disk , it will cause a large cavity in the inner disk since it will be very efficient at cutting off the inner disk from the outer disk .this will lead to significant depletion of small and large dust grains in the inner disk and lower accretion onto the central star ( 3.2 ) .this is consistent with ( pre-)transitional disks which have submm and nir scattered light cavities and mir deficits in the sed ( e.g. 
, lkca 15 , gm aur , sz 91 , rx j1604 - 2130 ; 2.12.3 ) .this is also consistent with the lower accretion rates measured for these objects ( 4.2 ) .one caveat is that massive planets would be the easiest to detect , yet there has not been a robust detection of a protoplanet in a ( pre-)transitional disk yet . a less massive planet ( or fewer planets ) would still lead to substantial clearing of the inner disk , but be less efficient at cutting off the inward flow of material from the outer disk ( 3.2 ) .this is consistent with those ( pre-)transitional disks that have a submm cavity but no mir deficit in the sed , such as wsb 60 , indicating that small dust still makes it through into the inner disk ( 2.12.3 ) .interestingly , wsb 60 is in oph and overall ( pre-)transitional disks seds are rare in this region . given that oph is quite young ( age myr ) , this may be indicative of the effects of dust evolution and planet growth .disks in oph could be in the initial stages of gap opening by planets whereas in older systems , the dust grains in the inner disk have grown to larger sizes and the planets have grown large enough to more efficiently disrupt the flow of material into the inner disk .the above suggests that most disks with planets go through a pre - transitional disk phase and that there are many disks with small gaps that have escaped detection with currently available techniques .more theoretical work is needed to explore if all pre - transitional disks eventually enter a transitional disk phase as planets grow more massive and the inner disk is accreted onto the star . note also that there are some disks which have submm cavities and mir sed deficits , but no evidence of clearing in nir scattered light ( e.g. , sr 21 , doar 44 , rx j1615 ) .apparently , there is some important disk physics regarding accretion efficiencies that we are missing .it is necessary to study disks in other star forming regions with alma , both to probe a range of ages and to increase the statistical sample size , in order to address questions of this kind .detections of protoplanets would be ideal to test the link between planet mass and disk structure .while much progress has been made understanding the nature of the disks described above , future work may focus more closely on disks with smaller optically thin gaps that could plausibly be created by low - mass planets .the presence of low mass orbiting companions ( ; * ? ? ?* ) is expected to alter the dust disk significantly with little impact on the gas disk , whereas high mass companions ( ) can create gaps or holes in the gas disk and possibly alter its dynamics ._ kepler _ finds that super - earth mass objects are very common in mature planetary systems ( see chapter in this volume by _ fischer et al ._ ) , so it may be that smaller mass objects form fairly commonly in disks and open holes and gaps that are not cleared of dust . these systems are more difficult to identify with sed studies and current interferometers , so potentially there are many more disks with gaps than currently known. 
high spatial resolution images from alma will be able to detect these small gaps , in some cases even those with sizes down to about 3 au .there are many remaining avenues that researchers can take to fully characterize clearing in disks around tts .theoretically , we can simulate the influence of various mass planets on different sized dust particles , which can be compared with observations at various wavelengths to constrain a potential planet s mass .we can use alma , vlti , and jwst to test the full extent of disk holes and gaps as well as their frequency with respect to age .we can also make substantial progress studying the gas distributions in disks in the near future with alma , particularly to determine if the structure inferred for the disk from studies of the dust is the same as that in the gas .gas tracers may also reveal the presence of an orbiting planet via emission from a circumplanetary disk .lastly , and perhaps most importantly , we need robust detections of protoplanets in disks around young stars . these future advances will help us understand how gas giant planets and terrestrial planets form out of disks and hopefully antiquate this review by the time of ppvii . +* acknowledgments . *the authors thank the anonymous referee , c. dullemond , l. hartmann , and l. ingleby for insightful comments which helped improve this review .work by c.e . was performed in part under contract with caltech funded by nasa through the sagan fellowship program executed by the nasa exoplanet science institute . z.z .acknowledges support by nasa through hubble fellowship grant hst - hf-51333.01-a awarded by the space telescope science institute , which is operated by the association of universities for research in astronomy , inc ., for nasa , under contract nas 5 - 26555 . + finally , a special recognition of the contribution of paola dalessio , who passed away in november of 2013 .she is greatly missed as a scientist , colleague , and friend . among paola smost important papers are those which addressed the issue of dust growth and mixing in t tauri disks . in , she showed that disk models with well - mixed dust with properties like that of the diffuse interstellar medium produced disks that were too vertically thick and produced too much infrared excess , while lacking sufficient mm - wave fluxes . 
in the second paper of the series paola and collaborators showed that models with power - law size distributions of dust with maximum sizes around 1 mm produced much better agreement with the mm and infrared emission of most t tauri disks , but failed to exhibit the 10 μm silicate emission feature usually seen , leading to the conclusion that the large grains must have settled closer to the midplane while leaving a population of small dust suspended in the upper disk atmosphere . this paper provided the first clear empirical evidence for the expected evolution of dust as a step in growing larger bodies in protoplanetary disks . another important result was the demonstration that once grain growth proceeds past sizes comparable to the wavelengths of observation , the spectral index of the disk emission is determined only by the size distribution of the dust , not by its maximum size . along with quantitative calculations of dust properties , the results imply that the typical opacities used to estimate dust masses will generally lead to underestimates of the total solid mass present , a point that is frequently forgotten or ignored . in the third paper of this series , paola and her collaborators developed models which incorporated a thin central layer of large dust along with depleted upper disk layers containing small dust . the code developed for these models has been used in over 30 papers to compare with observations , especially those from the infrared spectrograph ( irs ) as well as the irac camera on board the spitzer space telescope , and more recently from pacs on board the herschel space telescope and from the sma . paola s models also played a crucial role in the recognition of transitional and pre - transitional t tauri disks . in many cases , particularly those of the pre - transitional disks , finding evidence for an inner hole or gap from the spectral energy distribution depends upon careful and detailed modeling . the inference of gaps and holes from the seds is now being increasingly confirmed directly by mm- and sub - mm imaging , showing reasonable agreement in most cases with the hole sizes predicted by the models . combining imaging with sed modeling in the future will place additional constraints on the properties of dust in protoplanetary disks . paola s influence in the community extended well beyond her direct contributions to the literature in over 100 refereed papers . she provided disk models for many other researchers as well as detailed dust opacities . the insight provided by her calculations informed many other investigations , such as studies of x - ray heating of protoplanetary disk atmospheres , of the chemical structure of protoplanetary disks and the propagation of high energy radiation , and of the photoevaporation of protoplanetary disks .
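the point above about opacities and dust masses can be made concrete with the standard optically thin estimate , in which the dust mass scales as the mm flux times the squared distance divided by the dust opacity and the planck function , m_dust = f_nu d^2 / ( kappa_nu b_nu(t_dust) ) . the sketch below evaluates this relation ; all numerical inputs ( flux , distance , dust temperature and opacity ) are illustrative assumptions rather than values taken from the text , and the opacity choice dominates the uncertainty , which is exactly the underestimate issue noted above .

```python
import numpy as np

# Illustrative optically thin dust-mass estimate, M = F_nu * d^2 / (kappa_nu * B_nu(T)).
# All numerical inputs below (flux, distance, temperature, opacity) are assumed values
# chosen only to show how the estimate scales; they are not taken from the text.

h = 6.626e-27      # erg s
c = 2.998e10       # cm / s
k_B = 1.381e-16    # erg / K

def planck_nu(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

nu = c / 0.13                 # ~1.3 mm observing wavelength, converted to Hz
F_nu = 100e-26                # 100 mJy in erg s^-1 cm^-2 Hz^-1 (assumed flux)
d = 140.0 * 3.086e18          # 140 pc in cm (assumed distance)
T_dust = 20.0                 # K (assumed dust temperature)
kappa_nu = 2.3                # cm^2 per gram of dust at 1.3 mm (assumed opacity)

M_dust = F_nu * d**2 / (kappa_nu * planck_nu(nu, T_dust))
print(M_dust / 1.989e33, "solar masses of dust")
```

with these assumed inputs the estimate comes out near a few times 1e-4 solar masses of dust ; changing the assumed opacity ( for example if grains have grown well beyond the observing wavelength ) shifts the answer by factors of several , which is the sense in which standard opacities tend to underestimate the solid mass .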
shape optimization problems arise frequently in technological processes , which are modeled in the form of partial differential equations as in . in many practical circumstances ,the shape under investigation is parameterized by finitely many parameters , which on the one hand allows the application of standard optimization approaches , but on the other hand limits the space of reachable shapes unnecessarily. shape calculus , which has been the subject of several monographs presents a way out of that dilemma .however , so far it is mainly applied in the form of gradient descent methods , which can be shown to converge .the major difference between shape optimization and the standard pde constrained optimization framework is the lack of the linear space structure in shape spaces .if one can not use a linear space structure , then the next best structure is the riemannian manifold structure as discussed for shape spaces in .the publication makes a link between shape calculus and shape manifolds and thus enables the usage of optimization techniques on manifolds in the context of shape optimization .pde constrained shape optimization however , is confronted with function spaces defined on varying domains .the current paper presents a vector bundle framework based on the riemannian framework established in , which enables the discussion of lagrange newton methods within the shape calculus framework for pde constrained shape optimization .the paper first presents the novel riemannian vector bundle framework on section [ sec2 ] , discusses this approach for a specific academic example in section [ sec3 ] and presents numerical results in section [ sec4 ] .the typical set - up of an equality constrained optimization problem is where are linear spaces and sufficiently smooth nonlinear functions . in some situationsthe constraint allows to apply the implicit function theorem in order the define a unique control to state mapping and thus the constrained problem maybe reduced to an unconstrained one of the form however , the constrained formulation is often computationally advantageous , because it allows the usage of pre existing solver technology for the constraint and it is geared towards an efficient sand ( simultaneous analysis and design ) or one shot approach based on linear kkt systems .so far , shape optimization methods based on the shape calculus , have been mainly considered with the reduced black box framework above via the implicit function theorem mainly because the set of all admissible shapes is typically not a linear space unlike the space above .the publication has developed a riemannian framework for shape optimization in the reduced unconstrained paradigm , which enables newton like iteration techniques and convergence results .this publication aims at generalizing those results to the constrained perspective in particular for the case that the constraint is of the form of a set of partial differential equations ( pde ) . within that framework ,the space for the state variable is a linear ( function ) space depending explicitly on , e.g. , , where is the interior of a shape .this line of thinking leads to vector bundles of function spaces as discussed in detail in .thus , we now consider a riemannian manifold of class ( ) , where is a smooth mapping assigning any point an inner product on the tangential bundle . 
for each ,there is given a hilbert space such that the set is the total space of a vector bundle .in particular , there is a bundle projection and for an open covering of a local isomorphism where is a hilbert space .in particular , we have an isomorphism on each fiber and for , the mapping is a linear isomorphism .the total space of the vector bundle is by itself a riemannian manifold , where the tangential bundle satisfies in riemannian geometry , tangential vectors are considered as first order differential operators acting on germs of scalar valued functions ( e.g. ) .such a differential operator will be notated by , if is a differentiable function and .we will have to deal with derivatives , where we will always use directional derivatives of scalar valued functions only , but notate them in the usual fashion .let the derivative of at in direction be denoted by .then , we define in this setting in particular , we denote where and , if .we consider now the following constrained optimization problem where is a bilinear form and a linear form defined on the fiber which are with respect to .the scalar valued function is assumed to be .intentionally , the weak formulation of the pde is chosen for ease of presentation .now , it will be necessary to define the lagrangian in order to formulate the adjoint and design equation to the constrained optimization problem ( [ vb - opt1][vb - opt2 ] ) .[ lagrangian ] we define the lagrangian in the setting above for as where with .let solves the optimization problem ( [ vb - opt1][vb - opt2 ] ) .then , the ( adjoint ) variational problem which we get by differentiating with respect to is given by and the design problem which we get by differentiating with respect to is given by =0\ , , \\forall w\in t_{\hat{u}}\cn\ ] ] where solves ( [ adjoint ] ) .if we differentiate with respect to , we get the state equation ( [ vb - opt2 ] ) . these ( kkt ) conditions ( [ vb - opt2][design ] )could be collected in the following condition : in a vector space setting , the existence of a solution of the ( adjoint ) variational problem ( [ adjoint ] ) is typically guaranteed by so called constraint qualifications . from this point of view , here , the existence itself can be interpreted as formulation of a constraint qualification . by using a riemannian metric on and a smoothly varying scalar product on the hilbert space , we can envision as a hilbert space with a canonical scalar product and thus also as riemannian manifold .this scalar product can be used to apply the riesz representation theorem in order to define the gradient of the lagrangian by the condition now , similar to standard nonlinear programming we can solve the problem of finding with as a means to find solutions to the optimization problem ( [ vb - opt1][vb - opt2 ] ) .the nonlinear problem ( [ newton - problem ] ) has exactly the form of the root finding problems discussed in . 
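before the riemannian machinery is exploited , it is useful to recall what the root - finding problem for the gradient of the lagrangian looks like in a plain finite - dimensional vector space , where covariant derivatives reduce to ordinary derivatives and the retraction is ordinary addition : newton s method is applied directly to the kkt system . the following is a minimal sketch with a toy objective and a single equality constraint ; both are illustrative assumptions and not the model problem treated later .

```python
import numpy as np

# Toy vector-space analogue of the Lagrange-Newton iteration: minimize f(x)
# subject to c(x) = 0 by applying Newton's method to grad L(x, lam) = 0.
# Objective and constraint below are illustrative choices only.

def f(x):            # objective: distance to an (infeasible) target point
    return 0.5 * np.sum((x - np.array([2.0, 1.0]))**2)

def grad_f(x):
    return x - np.array([2.0, 1.0])

def hess_f(x):
    return np.eye(2)

def c(x):            # single equality constraint: points on the unit circle
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def jac_c(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def hess_c(x, lam):  # lam^T times the second derivative of c
    return lam[0] * 2.0 * np.eye(2)

x, lam = np.array([1.0, 0.0]), np.array([0.0])
for k in range(30):
    A = jac_c(x)
    res = np.concatenate([grad_f(x) + A.T @ lam, c(x)])   # KKT residual
    if np.linalg.norm(res) < 1e-12:
        break
    H = hess_f(x) + hess_c(x, lam)                         # Hessian of the Lagrangian
    KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    step = np.linalg.solve(KKT, -res)
    x, lam = x + step[:2], lam + step[2:]
print(k, x, lam)
```

near a regular kkt point this plain iteration converges locally quadratically ; the riemannian lagrange newton method developed next is designed to reproduce that behaviour on shape spaces , with the exponential map or a retraction replacing the additive update .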
exploiting the riemannian structure on , we can formulate a newton iteration involving the riemannian hessian which is based on the resulting riemannian connection : this iteration will be detailed out below .however , before that , we have to specify the scalar product on the hilbert space involved .since we will use the exponential map based on the riemannian metric on , we would like to choose a metric that is in the hilbert space parts as simple as possible .therefore we use the metric defined on the hilbert space and transfer that canonically to the hilbert spaces .thus , we assume now that in the sequel we only have to deal with one particular chart from the covering and define there now , geodesics in the hilbert space parts of are represented just by straight lines in and the exponential map can be expressed in the form where denotes the exponential map on the manifold . within iteration ( [ newton ] ) ,the hessian has to be discussed .it is based on the riemannian connection on at .the expression may denote the riemannian covariant derivative on .since the scalar product in is completely independent from the location , we observe that mixed covariant derivatives of vectors from with respect to tangential vectors in are reduced to simple directional derivatives which is the case for derivatives in linear spaces anyway .thus : +\frac{\partial}{\partial u}z[h_u]+\frac{\partial}{\partial p}z[h_p]\\ \frac{\partial}{\partial y}w[h_y]+\nabla^{\cn}_{u}w[h_u]+\frac{\partial}{\partial p}w[h_p]\\ \frac{\partial}{\partial y}q[h_y]+\frac{\partial}{\partial u}q[h_u]+\frac{\partial}{\partial p}q[h_p ] \end{array}\right ) } } \end{array}\ ] ] from the definition of the hessian as :=\nabla_h\grad{{\mathscr{l}}} ] is illustrated .now , we consider dependent on .therefore , we denote it by .[ fig_omega ] interface ( 70,80) ( 10,80) ( 43,13) the manifold ,{{\mathbbm{r}}}^2) ] is represented by a curve \to{{\mathbbm{r}}}^2\text { , } \theta\mapsto c(\theta) ] , the tangent space is isomorphic to the set of all smooth vector fields along , i.e. , ,{{\mathbbm{r}}}^2)\cong\{h\ , |\,h=\alpha n , \alpha\in c^\infty \left([0,1],{{\mathbbm{r}}}\right)\}\ ] ] where n is the unit outer normal to at .thus , all considerations of carry easily over to our manifold ,{{\mathbbm{r}}}^2) ] and {.1mm}{4mm}_{\hspace{.6mm}\omega_2} ] is linear and continuous .the perturbed boundaries in ( [ def_shape_der ] ) are defined by where denotes the perturbation of identity and ] can be expressed as an integral over the domain as well as an integral over the interface which is better suited for a finite element implementation as already mentioned for example in ( * ? ? ?* remark 2.3 , p. 531 ) . an important point to note hereis that the shape derivative of our evaluated in its saddle point is equal to the one of due to the theorem of correa and seeger ( * ? ? ?* theorem 2.1 ) .such a saddle point is given by which leads to the adjoint equation and to the state equation like in we first deduce a representation of the shape derivative expressed as a domain integral which will later allow us to calculate the boundary expression of the shape derivative by means of integration by parts on the interface .one should note however , that by the hadamard structure theorem ( * ? ? ? * theorem 2.27 ) only the normal part of the continuous vector field has an impact on its value . 
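the remark on the hadamard structure theorem can be checked numerically on a discretized shape : perturbing a closed curve by a purely tangential vector field changes a domain functional such as the enclosed area only to second order in the step size , whereas a normal field produces a first - order change . the following sketch does this for a polygonal approximation of the unit circle ; the curve , the functional and the step size are illustrative choices .

```python
import numpy as np

# Numerical illustration of the Hadamard structure remark above: only the normal
# component of the perturbation field affects a domain functional to first order.

n = 2000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
curve = np.column_stack([np.cos(theta), np.sin(theta)])   # unit circle as reference shape

def area(pts):
    """Shoelace formula for the area enclosed by a closed polygon."""
    x0, y0 = pts[:, 0], pts[:, 1]
    x1, y1 = np.roll(x0, -1), np.roll(y0, -1)
    return 0.5 * np.abs(np.sum(x0 * y1 - x1 * y0))

tangent = np.column_stack([-np.sin(theta), np.cos(theta)])
normal = np.column_stack([np.cos(theta), np.sin(theta)])

t = 1e-4
a0 = area(curve)
dA_tangential = (area(curve + t * tangent) - a0) / t   # second-order effect, ~ pi * t
dA_normal = (area(curve + t * normal) - a0) / t        # first-order effect, ~ perimeter
print(dA_tangential, dA_normal)
```

the first printed number is of the order of the step size ( a second - order effect divided by the step ) , while the second is close to the perimeter 2 pi , as expected from the structure theorem .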
applying the following common rule for differentiating domain integrals {.1mm}{9mm}_{\hspace{1mm}t=0}=\int_{\omega}\left(d_m\eta+\mathrm{div}(v)\eta\right)\ ] ] which was proved in (* lemma 3.3 ) yields & = \lim\limits_{t\to 0^+}\frac{{{\mathscr{l}}}(y , u_t , p)-{{\mathscr{l}}}(y , u , p)}{t}=\frac{d^+}{dt}{{\mathscr{l}}}(y , u_t , p)\,\rule[-3mm]{.1mm}{6mm}_{\hspace{.5mm}t=0}\\ & = \int_{\omega(u)}d_m\left(\frac{1}{2}(y-\overline{y})^2\right)+d_m\left(\nabla y^t\nabla p\right)-d_m(fp)\\ & \hspace*{13mm}+\mathrm{div}(v)\left(\frac{1}{2}(y-\overline{y})^2+\nabla y^t\nabla p - fp\right)\hspace{.7mm}dx\\ & \hspace*{4mm}-\int_u d_m\left ( \left\llbracket \frac{\partialy}{\partial n } p\right\rrbracket\right ) + \mathrm{div}_u(v)\left\llbracket \frac{\partialy}{\partial n } p\right\rrbracket\hspace{.7mm}ds \end{split}\ ] ] where denotes the material derivative with respect to which is defined by {.1mm}{6mm}_{\hspace{1mm}t=0}\ ] ] for a generic function . for the material derivative the product rule holds .moreover , the following equality was proved in combining ( [ shape_der_1 ] ) , the product rule and ( [ material_grad ] ) we obtain & = \int_{\omega(u)}(y-\overline{y})d_m\left(y\right)+\nabla\hspace{-.5 mm } \left(d_m(y)\right)^t\nabla p+\nabla y^t\nabla\hspace{-.5 mm } \left(d_m\left(p\right)\right)\\ & \hspace*{13mm}-\nabla y^t \left(\nabla v + \nabla v^t\right)\nabla p - d_m(f)p - fd_m(p)\\ & \hspace*{13mm}+\mathrm{div}(v)\left(\frac{1}{2}(y-\overline{y})^2+\nabla y^t\nabla p - fp\right)\hspace{.7mm}dx\\ & \hspace*{4mm}-\int_u \left\llbracket d_m\left(\frac{\partial y}{\partial n}\right ) p+\frac{\partial y}{\partial n}d_m(p)\right\rrbracket + \mathrm{div}_u(v)\left\llbracket \frac{\partial y}{\partialn } p\right\rrbracket\hspace{.7mm}ds . \end{split}\ ] ] from this we get & = \int_{\omega(u)}\left((y-\overline{y})-\triangle p\right)d_m\left(y\right)+\left(-\triangle y - f\right)d_m\left(p\right)\\ & \hspace*{13mm}-\nabla y^t \left(\nabla v + \nabla v^t\right)\nabla p - d_m(f)p\\ & \hspace*{13mm}+\mathrm{div}(v)\left(\frac{1}{2}(y-\overline{y})^2+\nabla y^t\nabla p - fp\right)\hspace{.7mm}dx\\ & \hspace*{4mm}+\int_u \left\llbracket\frac{\partial p}{\partial n}d_m(y ) -d_m\left(\frac{\partial y}{\partial n}\right ) p\right\rrbracket + \mathrm{div}_u(v)\left\llbracket \frac{\partial y}{\partial n } p\right\rrbracket\hspace{.7mm}ds . \end{split}\ ] ] to deal with the term , we note that the shape derivative of a generic function with respect to the vector field is given by :=d_mj - v^tj.\ ] ] therefore is equal to in the both subdomains , . due tothe continuity of the state and of the flux ( [ n ] ) their material derivative is continuous .thus , we get that follows from ( [ n ] ) , ( [ adjoint4 ] ) and the identity which implies by combining ( [ adjoint1 ] ) , ( [ designe ] ) and ( [ shape_der_3][pbe ] ) , we obtain =\int_{\omega(u)}&-\nabla y^t \left(\nabla v + \nabla v^t\right)\nabla p - pv^t\nabla f\\ & + \mathrm{div}(v)\left(\frac{1}{2}(y-\overline{y})^2+\nabla y^t\nabla p - fp\right)\hspace{.7mm}dx \end{split } } \ ] ] i.e. 
, the shape derivative of expressed as domain integral which is equal to the one of due to the theorem of correa and seeger .now , we convert this domain integral into a boundary integral as mentioned above .integration by parts in ( [ boundary_expression ] ) yields since the outer boundary is not variable , we can choose the deformation vector field equals zero in small neighbourhoods of .therefore , the outer integral in ( [ int_by_parts1 ] ) disappears .combining ( [ boundary_expression ] ) , ( [ int_by_parts1 ] ) and the vector calculus identity which was proved in gives =&\int_{\omega(u)}-\nabla p^t\nabla\left(v^t\nabla y\right)-\nabla y^t \nabla\left(v^t\nabla p\right)\\ & \hspace*{9mm}-(y-\overline{y})v^t\nabla y+fv^t\nabla p \hspace{.7mm}dx\\ & + \int_{u } \left\llbracket\left(\frac{1}{2}(y-\overline{y})^2+\nablay^t\nabla p - fp\right)\left < v , n\right>\right\rrbracket ds . \label{sg1 } \end{split}\ ] ] then , applying integration by parts in ( [ sg1 ] ) we get and analogously like in ( [ int_by_parts1 ] ) the outer integral in ( [ int_by_parts2 ] ) as well as in ( [ int_by_parts3 ] ) vanishes due to the fixed outer boundary .thus , it follows that =&\int_{\omega(u)}v^t\nabla p\left(\triangle y+f\right)+v^t\nabla y\left(\triangle p -(y-\overline{y})\right)\hspace{.7mm}dx\\ & + \int_u \left\llbracket\left(\frac{1}{2}(y-\overline{y})^2+\nabla y^t\nabla p - fp\right)\left < v , n\right>\right\rrbracket\\ & \hspace*{9mm}-\left\llbracket \frac{\partial y}{\partial n}v^t\nabla p \hspace{.3mm}\right\rrbracket-\left\llbracket \frac{\partial p}{\partial n}v^t\nabla y \hspace{.3mm}\right\rrbracket\hspace{.7mm}ds.\label{sg2 } \end{split}\ ] ] the domain integral in ( [ sg2 ] ) vanishes due to ( [ adjoint1 ] ) and ( [ designe ] ) .moreover , the term disappears because of ( [ n ] ) and the term because of the continuity of and .that follows from ( [ n ] ) , ( [ adjoint3 ] ) and ( [ bracket0 ] ) .thus , we obtain the shape derivative of expressed as interface integral : =-\int_u \left\llbracket f\right\rrbracket p\left < v , n\right > ds } \ ] ] now , we consider the objective in ( [ oc1 ] ) with perimeter regularization . 
combining ( [ shape_der_interface ] ) with proposition 5.1 in we get =\int_u \left(-\left\llbracket f\right\rrbracket p+\mu\kappa\right)\left < v , n\right > ds } \ ] ] where denotes the curvature corresponding to the normal .note that ( [ shape_der_interface ] ) is equal to ] with perimeter regularization due to the theorem of correa and seeger as mentioned above .we focus now on the weak formulation ( [ weak - kkt.1]-[weak - kkt.3 ] ) and observe the following for the right hand sides in the case of ( [ oc1][oc3 ] ) : {\bar{w}}=&\int_u \left(\left\llbracket f\right\rrbracket p-\mu\kappa\right)\left<{\bar{w}},n\right > ds\label{no2}\\ -a_{u}(y,{\bar{q}})+b_{u}({\bar{q } } ) = & \int_{\omega(u)}-\nabla y^t\nabla { \bar{q}}+f{\bar{q}}\hspace{.7mm}dx\label{diri.state}\end{aligned}\ ] ] these expressions are set to zero , in order to define the necessary conditions of optimality .now , we discuss more details about the hessian operators in the left hand sides of ( [ weak - kkt.1][weak - kkt.3 ] ) .we first consider them without the term which requires special care .these are at the solution of the optimization problem ( [ oc1][oc3 ] ) for all as follows : w=0\\ h_{13}(q,{\bar{z}})&=a_u({\bar{z}},q)=\int_{\omega(u)}\nabla { \bar{z}}^t \nabla q dx\\ h_{21}(z,{\bar{w}})&=\frac{\partial}{\partial y}\frac{\partial}{\partial u}(\left[j(y , u)+a_{u}(y , p)\right]{\bar{w}})z=0\\ h_{23}(q,{\bar{w}})&=\frac{\partial}{\partial u}\left[a_{u}(y , q)-b_{u}(q)\right]{\bar{w}}=-\int_u\left\llbracket f\right\rrbracket q\left<{\bar{w}},n\right > ds\\h_{31}(z,{\bar{q}})&=a_{u}(z,{\bar{q}})=\int_{\omega(u)}\nabla z^t \nabla { \bar{q}}dx\\ h_{32}(w,{\bar{q}})&=\frac{\partial}{\partial u}\left[a_{u}(y,{\bar{q}})-b_{u}({\bar{q}})\right]w=-\int_u\left\llbracket f\right\rrbracket { \bar{q}}\left < w , n\right > ds\\ h_{33}(q,{\bar{q}})&=0\end{aligned}\ ] ] we compute now the term . it will be evaluated at the solution of the optimization problem which means that it consists only of the second shape derivative . in section [ sec4 ] this solution will be a straight line connection of the points and , i.e. , the curvature is equal to zero . combining proposition 5.1 in with the following rule for differentiating boundary integrals {.1mm}{9mm}_{\hspace{1mm}t=0}=\int_{\gamma } \left(d\eta[v]+\left(\frac{\partial\eta}{\partial n}+\eta\kappa\right)\left < v , n\right>\right)\ ] ] which was proved in yields \left < w , n\right>-\left\llbracket f\right\rrbracket\left(\kappa p+\frac{\partial p}{\partial n}\right)\left<{\bar{w}},n\right>\left< w , n\right>\\ & \hspace*{8mm}+\mu\frac{\partial w}{\partial \tau}\frac{\partial \bar{w}}{\partial \tau}\left<{\bar{w}},n\right>\left < w , n\right > ds \end{split}\ ] ] where denotes the derivative tangential to .we have to evaluate the shape derivative $ ] in ( [ h22_1 ] ) .we observe in our special case because of the necessary optimality condition ( [ no2 ] ) .thus , it holds that =-{\bar{w}}^t\nabla p=-{\bar{w}}^t\frac{\partial p}{\partial n}n\quad\text{on } u \ ] ] due to ( [ shape_material_der ] ) . applying the product rule for shape derivatives yields & = \left\llbracket df[{\bar{w}}]\hspace{.7mm}p\right\rrbracket + \left\llbracket f dp[{\bar{w}}]\right\rrbracket\stackrel{(\ref{n})}=\left\llbracket df[{\bar{w}}]\right\rrbracket p+\left\llbracket f \right\rrbracket dp[{\bar{w}}]\\ & \hspace*{-1.8mm}\underset{(\ref{dp})}{\stackrel{(\ref{p0})}=}-\left\llbracket f \right\rrbracket \frac{\partial p}{\partial n}\left<{\bar{w}},n\right>\quad\text{on } u. 
\end{split}\ ] ] thus , the hessian operator reduces to by using the expressions above , we can formulate the qp ( [ qp.1 ] , [ qp.2 ] ) at the solution in the following form : where the objective function is given by this qp in weak formulation can be rewritten in the more intelligible strong form of an optimal control problem : the adjoint problem to this optimal control problem is the boundary value problem : the resulting design equation for the optimal control problem ( [ qps.1][qps.6 ] ) is this section , we use the qp ( [ qp.p1 ] , [ qp.p2 ] ) away from the optimal solution as a means to determine the step in the shape normal direction and thus create an iterative solution technique very similar to sqp techniques known from linear spaces .we solve the optimal control problem ( [ qps.1][qps.6 ] ) by employing a cg iteration for the reduced problem ( [ oc.design ] ) .i.e. , we iterate over the variable and each time the cg iteration needs a residual of equation ( [ oc.design ] ) from , we compute the state variable from ( [ qps.2][qps.6 ] ) and then the adjoint variable from ( [ ocad.1 ] , [ ocad.2 ] ) , which enables the evaluation of the residual the particular values for the parameters are chosen as , and .the data are generated from a solution of the state equation ( [ oc2 ] , [ oc2 ] ) with being the straight line connection of the points and .the starting point of our iterations is described by a b spline defined by the two control points and .we build a coarse unstructured tetrahedral grid with roughly 6000 triangles as shown in the leftmost picture of figure [ fig - iterations ] .we also perform computations on uniformly refined grids with roughly 24000 and 98000 triangles . in figure [ fig - iterations ] are also shown the next two iterations on the coarsest grid , where table [ fig - iterations ] gives the distances of each shape to the solution approximated by where denotes the solution shape and is the first unit vector .similar to , the retraction chosen for the shape is just the addition of the to the current shape . 
in each iteration , the volume mesh is deformed according to the elasticity equation .table demonstrates that indeed quadratic convergence can be observed on the finest mesh , but also that the mesh resolution has a strong influence on the convergence properties revealed .the major advantage of the newton method over a standard shape calculus steepest method based on the ( reduced ) shape derivative =-\int_u \left(\left\llbracket f\right\rrbracket p-\mu\kappa\right)\langle v , n\rangle ds\ ] ] is the natural scaling of the step , which is just 1 near to the solution .when first experimenting with a steepest descent method , we found by trial and error , that one needs a scaling around 10 000 in order to obtain sufficient progress .[ fig - iterations ] ( 12,4 ) ( 0,0),title="fig : " ] ( 4.4,0),title="fig : " ] ( 8.8,0),title="fig : " ] .performance of shape lagrange newton algorithms : distances from the optimal solution on meshes with varying refinement .quadratic convergence on the finest grid can be observed .[ cols=">,^,^,^",options="header " , ]this paper presents a generalization of the riemannian shape calculus framework in to lagrange newton approaches for pde constrained shape optimization problems .it is based on the idea that riemannian shape hessians do not differ from classical shape hessians in the solution of a shape optimization problem and that newton methods still converge locally quadratically , if hessian terms are neglected which are zero at the solution anyway .it is shown that this approach is viable and leads to computational methods with superior convergence properties , when compared to only linearly converging standard steepest descent methods .nevertheless , several issues have to be addressed in future investigations , like : * more refined retractions have to be developed for large shape deformations . *as observed during the computations , the shape deformation sometimes leads to shapes , where normal vectors can no longer be reliably evaluated .provisions for those cases have be developed * full lagrange newton methods may turn out being not very computationally efficient . however , this paper lays the foundation for the construction of appropriate preconditoners for the reduced optimization problem in many practical cases . * the riemannian shape space properties including quadratic convergence of the lagrange newton approach seem to materialize only on very fine grids .a logical next development is then to use locally adapted meshes near the shape front to be optimized .this research has been partly funded by the dfg within the collaborative project exasolvers as part of the dfg priority program spp 1648 sppexa .furthermore , the authors are very grateful to an unknown referee for insightful comments , which helped significantly to improve the paper from an earlier version . e. arian and v. n. vatsa . a preconditioning method for shape optimization governed by the euler equations .technical report 9814 , institute for computer applications in science and engineering ( icase ) , 1998 .m. berggren .a unified discrete continuous sensitivity analysis method for shape optimization . in w. fitzgibbon et al . ,editor , _ applied and numerical partial differential equations _ ,volume 15 of _ computational methods in applied sciences _ ,pages 2539 .springer , 2010 .n. gauger , c. ilic , s. schmidt , and v. h. schulz .non parametric aerodynamic shape optimization . in g.leugering , s. engell , a. griewank , m. hinzea , r. rannacher , v. h. schulz , m. 
ulbrich , and s. ulbrich , editors , _ constrained optimization and optimal control for partial differential equations _ ,volume 160 , pages 289300 .birkhuser , basel , boston , berlin , 2012 .
the novel riemannian view on shape optimization developed in earlier work is extended to a lagrange newton approach for pde constrained shape optimization problems . the extension is based on optimization on riemannian vector space bundles and is illustrated by a simple numerical example .
given two sources and separately observed by two transmitters , we consider the problem of finding the minimum number of bits that needs to be sent by each transmitter to a common receiver who has access to side information and wants to compute a given function with high probability , _i.e. _ , with asymptotic zero error probability. the first result on this problem was obtained by krner and marton who derived the rate region for the case where is the sum modulo two of binary and and where is symmetric ( no side information is available at the receiver ) .interestingly , this result came before orlitsky and roche s general result for the single source case , which provides a closed form expression on the minimum number of bits needed to be transmitted to compute at the receiver , for arbitrary and .round communication in a point - to - point channel .also , coding schemes and converses established in have been used in other network configurations , such as cascade networks , .] however , krner and marton s arguments appear to be difficult to generalize to other functions and probability distributions ( for an extension of to sum modulo and symmetric distributions see ) .ahlswede and han proposed an achievable scheme for the sum modulo two problem with an arbitrary probability distribution which is a combination of the krner - marton and slepian - wolf schemes .the obtained rate region includes , and sometimes strictly , the convex hull of the two schemes .the same scheme has been used in to derive an achievable rate region for a certain class of polynomial functions which is larger than the slepian - wolf rate region .also , krner - marton s structural coding scheme has been used to obtain the rate region for certain instances of the problem where the receiver wants to compute some subspace generated by the sources . except for some specific linear functions and probability distributions ,the problem of finding a closed - form expression for the rate region of arbitrary functions and distributions remains in general open .non closed - form results have been obtained for general functions and distributions by doshi , shah , and mdard who derived conditions under which a rate pair can be achieved for fixed code length and error probability . a variation of this problem where one of the transmitters observes what the other transmitter sends has been investigated by ericson and krner .because of cooperation , the rate region of this problem includes the rate region of the problem considered in this paper .a more general communication setting has been investigated by nazer and gastpar , who considered the problem of function computation over a multiple access channel , thereby introducing potential interference between transmitters . 
in our problem, we characterize the rate region for a specific function and specific probability distribution .a slightly different problem for the same setting has been considered by han and kobayashi .there , they derived necessary and sufficient conditions for a function , such that for any probability distribution , the rate region of the problem becomes the same as slepian - wolf rate region .finally , function computation has also been studied in more general networks , such as in the context of network coding and decentralized decision making and computation .in this paper we first provide a general inner bound to the rate region of the function computation problem .then , we establish an outer bound using results from rate distortion for correlated sources . while this boundis not explicit in general , it implies an explicit outer bound .this latter outer bound and the inner bound are tight for the case where sources are independent given the side information . as a corollary, we recover the rate region for a single source .finally , we show that the inner bound characterizes the rate region for partially invertible functions , _i.e. _ , when or is a function of both and . as a corollary, we recover the slepian - wolf rate region which corresponds to the case where . for a single source and side information , the minimum number of bits needed for computing a function is the solution of an optimization problem defined over the set of all independent sets with respect to a characteristic graph defined by , , and .indeed , orlitsky and roche showed that , for a single source , allowing for multisets of independent sets does nt yield any improvement on achievable rates ( see proof of ( * ? ? ?* theorem ) ) .by contrast , for multiple sources multisets may indeed increase the set of achievable rate pairs as we show in an example .an outline of the paper is as follows . in section [ sec : probstate ]we formally state the problem and provide some background material and definitions .section [ sec : mainresults ] contains our results , and section [ sec : analysis ] is devoted to the proofs .( -1500,-1150 ) ( 0,0) ( 0,-1000) ( 980,-500) ( 100,-50)(900,-450)(560,-160) ( 100,-950)(900,-550)(550,-850) ( 1480,-500) [ fig : setting ] let , , , and be finite sets , and .let , be independent instances of random variables taking values over and distributed according to .an code consists of two encoding functions and a decoding function the error probability of a code is defined as where and a rate pair is achievable if , for any and all large enough , there exists an code whose error probability is no larger than .the rate region is the closure of the set of achievable rate pairs .the problem we consider in this paper is to characterize the rate region for given and .we recall the definition of conditional characteristic graph which plays a key role in coding for computing .[ def : condchargraph ] given and , the conditional characteristic graph of given is the ( undirected ) graph whose vertex set is and whose edge set to denote the edge set of a graph . ] is defined as follows .two vertices and are connected whenever there exists such that * , * .given two random variables and , where ranges over and over _ subsets _ of , is a subset of .an example of a sample of is , where .] 
we write whenever .independent sets of a conditional characteristic graph with respect to two random variables and and a function turns out to be elemental in coding for computing .in fact , given , the knowledge of an independent set of that includes the realization suffices to compute .the set of independent sets of a graph and the set of maximal independent sets of are denoted by and , respectively . given a finite set , we use to denote the collection of all multisets of .is a collection of elements from possibly with repetitions , _e.g. _ , if , then is a multiset . ][ conditional graph entropy ] [ def : condgraphent ] the conditional entropy of a graph is defined as whenever random variables form a markov chain . ] we now extend the definition of conditional characteristic graph to allow conditioning on variables that take values over independent sets .[ def : gencondgraph ] given and such that , . ]define for , , , and .the generalized conditional characteristic graph of given and , denoted by , is the conditional characteristic graph of given with respect to the marginal distribution and .[ fig : examplecond ] [ ex : bigger ] let and be random variables defined over the alphabets and , respectively , with further , suppose that and that take on values uniformly over the pairs with .the receiver wants to decide whether or , _ i.e. _ , it wants to compute fig .[ fig : cond ] depicts which is equal to by symmetry .hence we have and an example of a random variable that satisfies is one whose support set is for such a , the generalized conditional characteristic graph is depicted in fig . [fig : gencond ] and we have another that satisfies is one whose support set is for such a , the generalized conditional characteristic graph is depicted in fig . 
[fig : gencond2 ] and we have note that whenever the following lemma , proved in section [ sec : analysis ] , provides sufficient conditions under which [ lem : samecondgraph ] given and , we have for all such that in each of the following cases : * for all ; * is a complete graph or , equivalently , consists only of singletons ; * and are independent given .notice that if is a complete graph , by knowing and the function can be computed only if also is known exactly .our results are often stated in terms of certain random variables and which can usefully be interpreted as the messages sent by transmitter - x and transmitter - y , respectively .this interpretation is consistent with the proofs of the results .theorem [ achiev ] provides a general inner bound to the rate region : [ achiev ] is achievable whenever for some and that satisfy the markov chain constraints and either or , equivalently , moreover , we have the following cardinality bounds on the range of and : is a constant , the two markov chain constraints are equivalent to the single long markov chain which imply that the above sum rate inequality becomes the last part of the theorem is immediate .note that in the above theorem , and are not restricted to take values over maximal independent sets .by contrast with the single source case where the restriction to maximal independent induces no loss of optimality see definition [ def : condgraphent ] where may be restricted to range over two sources the restriction to maximal independent sets may indeed induce a loss of optimality .this will be illustrated in example [ ex : partiallyinv ] of section [ rregions ] which considers a setting where theorem [ achiev ] is tight and characterizes the rate region .theorem [ achiev ] does not , in general , give the rate region .an example of this is the sum modulo of binary and ( no side information ) with symmetric distribution as considered by krner and marton : [ exkm ] let be the sum modulo of binary and with joint distribution .\ ] ] assuming , and both consists of singletons .this implies that the achievable region given by theorem [ achiev ] reduces to since for all that satisfy note that since ( which is equal to according to claim a. of lemma [ lem : samecondgraph ] ) consists of singletons , we have furthermore , because of the markov chain constraint we have by the data processing inequality .hence , and yield and , from the same argument we get inequalities thus become which corresponds to the slepian - wolf rate region . this region is nt maximal since the maximal rate region is given by the set of rate pairs that satisfy the only two constraints as shown by krner and marton .we now provide a rate region outer bound which is derived using results from rate distortion for correlated sources : [ th : outerbound1 ] if is achievable , then for some random variables that satisfy and markov chain constraints from lemma [ lem : jointcompressionlong ] in the appendix a , one can show that had the above first two markov chain constraints been the outer bound given by theorem [ th : outerbound1 ] would be equal to the inner bound given by theorem [ achiev ] . however , these hypothetical markov chains do nt hold in general since the inner bound is not tight in general as we show in example [ exkm ] . 
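the graph constructions used in the definitions and examples above can be made concrete with a small script : two source symbols are joined whenever some jointly possible value of the side information makes the function values differ , and the ( maximal ) independent sets of the resulting graph are the objects over which the auxiliary random variables range . the alphabet , pmf and function below are toy choices , not the example from the text .

```python
from itertools import combinations

# Illustrative construction of a conditional characteristic graph: vertices are
# the symbols of X, and two symbols are joined whenever some jointly possible
# side-information value makes the function values differ.  Toy pmf and function.

X = [0, 1, 2, 3]
S = [0, 1, 2]                                   # side-information alphabet
p = {(x, s): 1.0 / 12.0 for x in X for s in S}  # uniform toy pmf with full support
f = lambda x, s: (x + s) % 2                    # toy function to be computed

def connected(x1, x2):
    return any(p[(x1, s)] > 0 and p[(x2, s)] > 0 and f(x1, s) != f(x2, s) for s in S)

edges = {(x1, x2) for x1, x2 in combinations(X, 2) if connected(x1, x2)}

def independent(subset):
    return all((a, b) not in edges and (b, a) not in edges for a, b in combinations(subset, 2))

# Brute-force enumeration of independent sets, then keep the maximal ones.
ind_sets = [set(s) for r in range(1, len(X) + 1) for s in combinations(X, r) if independent(s)]
maximal = [s for s in ind_sets if not any(s < t for t in ind_sets)]
print("edges:", sorted(edges))
print("maximal independent sets:", maximal)
```

for this toy function the graph is complete bipartite between even and odd symbols , so the maximal independent sets are exactly the two parity classes ; knowing which class the source symbol lies in suffices to compute the function once the side information is available .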
although theorem [ th : outerbound1 ] does nt provide an explicit outer bound it is implicitly characterized by the random variables that should ( in part ) satisfy theorem implies the following explicit outer bound which can alternatively be derived from ( * ? ? ?* theorem ) : [ th : outerbound2 ] if is achievable then the inner and outer bounds given by theorem [ achiev ] and corollary [ th : outerbound2 ] are tight for independent sources , hence also for the single source computation problem for which we recover ( * ? ? ?* theorem ) .when the sources are conditionally independent given the side information , the rate region is the solution of two separate point - to - point problems .this is analogous to a result of gastpar which says that under the independence condition the rate - distortion region for correlated sources is the solution of two separate point - to - point wyner - ziv problems . [ rate region - independent sources][cor : independent ] if and are independent given , the rate region is the closure of rate pairs such that hence , if is constant , is achievable if and only if . [ ex : single ] let , let and be independent uniform random variables over and , respectively , and let and .the receiver wants to compute the function defined as since and are independent given , the rate region is given by theorem [ cor : independent ] .it can be checked that and a numerical evaluation of conditional graph entropy gives hence the rate region is given by the set of rate pairs satisfying the following theorem gives the rate region when the function is partially invertible with respect to ( with respect to , respectively ) , _ i.e. _ , when ( , respectively ) is a deterministic function of both and .is partially invertible if . ][ capacityonecomplete ] if is partially invertible with respect to , then the rate region is the closure of rate pairs such that for some that satisfies with the following cardinality bound when is invertible , is a function of both and , and theorem [ capacityonecomplete ] reduces to the slepian - wolf rate region .[ cor : invertible ] if is invertible , then the rate region is the closure of rate pairs such that the above result is trivial , since when the function is invertible , recovering the function is equivalent to recovering the sources . in the appendix, it has been shown that this result could be derived from theorem [ capacityonecomplete ] .let , , let be such that for any with , and let since is invertible , the rate region is the slepian - wolf rate region given by corollary [ cor : invertible ] .[ ex : partiallyinv ] example of a rate region for a partially invertible function.,width=377 ] consider the situation with no side information given by , with , and .\ ] ] since is partially invertible with respect to , we can use theorem [ capacityonecomplete ] to numerically evaluate the rate region .the obtained region is given by the union of the three shaded areas in fig .[ fig : achiev ] . these areasare discussed later , after example [ ex : single ] .to numerically evaluate the rate region , we would need to consider the set of all conditional distributions , , . since , consists of multisets of whose cardinalities are bounded by . 
however , as we now show , among all possible multisets with cardinality at most , considering just the multiset gives the rate region .consider a multiset with cardinality at most .* if the multiset does not contain , then the condition , hence the condition , can not be satisfied .therefore this multiset is not admissible , and we can ignore it . * if the multiset contains two samples and with conditional probabilities and , respectively , replacing them by one sample whose conditional probability is , gives the same terms and , hence the same rate pairs .therefore , without loss of optimality we can consider only multisets which contain a unique sample of . *if the multiset contains a sample with arbitrary conditional probability , replacing it with sample whose conditional probabilities are and gives the same rate pairs .( the same argument holds for a sample ) . + from , , and , multisets with one sample of and multiple copies of gives the rate region . *if the multiset has cardinality , adding samples with zero conditional probabilities , gives the same rate pairs .it follows that the rate region can be obtained by considering the unique multiset and by optimizing over the conditional probabilities that satisfy notice that this optimization has only four degrees of freedom .[ fig : achiev ] shows the achievable rate region in theorem [ achiev ] when restricting and to be over maximally independent sets ( gray area ) , all independent sets ( gray and light gray areas ) , and multisets of independent sets ( union of gray , light gray , and black areas ) .the latter area corresponds to the rate region by theorem [ capacityonecomplete ] . denoting these areas by , , and , .] respectively , we thus numerically get the strict sets inclusions larger independent sets for allow to reduce . however , such sets may have less correlation with and , and so may require to increase .by contrast , for the single source case , since only needs to be minimized it is optimal to choose maximal independent sets . but here choosing a bigger independent set may be non - optimal , and we have a trade - off between and .moreover , this holds also for two same independent sets with different labelings(different conditional probabilities ) , _ i.e. _ replacing two independent sets by one of them may lead into greater ( or smaller ) and smaller ( or greater ) , which means that it may be non - optimal to replace a multiset by a set .numerical evidence suggests that the small difference between and is unrelated to the specificity of the probability distribution in the example ( _ i.e. _ , by choosing other distributions the difference between and remains small ) .[ proof of lemma [ lem : samecondgraph ] ] suppose .for all claims , we show that , _i.e. _ , if two nodes are connected in , then they are also connected in .the opposite direction , , follows from the definition of generalized conditional characteristic graph .suppose nodes and are connected in .this means that there exist , and such that and if , then and are also connected in according to the definition of conditional characteristic graph .we now assume and prove claims a. , b. , and c. * since all probabilities are positive we have , hence and yields which implies that and are also connected in .* consists of singletons , so yields , and thus and are also connected in as we showed above .* from the independence of and given we have hence , since we have _ i.e. , _the rest of the proof is the same as claim a .. 
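the achievability argument that follows relies on jointly robust typical sequences . as a concrete companion , the standard robust - typicality test can be written out directly : a pair of sequences is declared jointly epsilon - robust typical when , for every symbol pair , the empirical frequency deviates from the joint probability by at most a factor epsilon of that probability , and no pair outside the support occurs . the pmf below is a toy choice and the implementation follows the standard definition ; it is a sketch , not code from the paper .

```python
import numpy as np
from collections import Counter

# Direct test of joint epsilon-robust typicality under the standard definition:
# (x^n, y^n) is jointly robust typical if, for every symbol pair (a, b),
# |pi(a, b) - p(a, b)| <= eps * p(a, b), where pi is the empirical frequency.
# The pmf below is a toy choice used only to exercise the test.

p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def jointly_robust_typical(xs, ys, p, eps):
    n = len(xs)
    counts = Counter(zip(xs, ys))
    for ab, prob in p.items():
        pi = counts.get(ab, 0) / n
        if abs(pi - prob) > eps * prob:
            return False
    # symbol pairs outside the support must not occur at all
    return all(ab in p for ab in counts)

rng = np.random.default_rng(1)
pairs = list(p.keys())
idx = rng.choice(len(pairs), size=10000, p=list(p.values()))
xs, ys = zip(*[pairs[i] for i in idx])
print(jointly_robust_typical(xs, ys, p, eps=0.1))
```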
we consider a coding scheme similar to the berger - tung rate distortion coding scheme with the only difference that here we use jointly robust typicality in place of strong typicality . recall that are jointly -robust typical , if for all , where note that if are jointly robust typical , then , and if takes values over subsets of , it means that .this fact ensures the asymptotic zero block error probability in decoding the random variables and .moreover , having decoded them reliably and since and satisfy markov chains , asymptotic zero block error probability in decoding the function follows .note that rate distortion achievability results do not , in general , provide a direct way for establishing achievability results for coding for computing problems .indeed , for rate distortion problems one usually considers average distortion between the source and the reconstruction block whereas in computation problems one usually considers the more stringent block distortion criterion . for and ,define to be equal to for and such that ( notice that all such gives the same . ) .further , for and let randomly generate independent sequences in an i.i.d .manner according to the marginal distribution , and randomly and uniformly bin these sequences into bins .similarly , randomly generate independent sequences in an i.i.d .manner according to , and randomly and uniformly bin them into bins .reveal the bin assignments and to the encoders and to the decoder ._ encoding : _ the -transmitter finds a sequence that is jointly robust typical with source sequence , and sends the index of the bin that contains , _ i.e. _ , .the -transmitter proceeds similarly sends .if a transmitter does nt find such an index it declares an errors , and if there are more than one indices , the transmitter selects one of them randomly and uniformly ._ decoding : _ given and the index pair , declare if there exists a unique jointly robust typical such that and , and such that is defined . otherwise declare an error ._ probability of error : _ there are two types of error .the first type of error occurs when no s , respectively s , is jointly robust typical with , respectively with .the probability of each of these two errors is shown to be negligible in for large enough .hence , the probability of the first type of error can be made arbitrary small by taking large enough .the second type of error refers to the slepian - wolf coding procedure . by symmetry of the encoding and decoding procedures ,the probability of error of the slepian - wolf coding procedure , averaged over sources outcomes , over s and s , and over the binning assignments , is the same as the average error probability conditioned on the transmitters selecting and .note that whenever , there is no error , _i.e. _ , by definition of robust typicality and by the definitions of and .we now compute the probability of the event . where denotes the ( - ) jointly robust typical set with respect to distribution .the probability of the second type of error is upper bounded as according to the properties of jointly robust typical sequences , we have for any and large enough . hence , from and the error probability of the second typeis thus negligible whenever where and follow from the markov chains and , respectively , and where follows from the markov chains and . for equivalence , we prove one direction , the proof for the other direction is analogues .suppose that holds , _i.e. 
_ to prove that , we show that for any , , , and such that we have since , there exists such that , hence , by the definition of generalized conditional characteristic graph , we have to prove that , note that for any , , , , and such that , random variables as in the theorem have joint distribution of the form in the following , we show that it is possible to replace and with and such that let be a connected compact subset of probability mass functions on .let and consider the following continuous functions on : the first functions are trivially continuous with respect to .functions and are continuous with respect to due to the continuity of , , and with respect to since and now , due to support lemma ( * ? ? ?* appendix c , page 631 ) , there exists with and a collection of conditional probability mass functions , indexed by such that for , hence , by we have moreover due to and the markov chain remains unchanged if we change to , hence the joint probability and the related quantities , , and are preserved .this implies that the rate region obtained by changing to remains unchanged .hence , assuming the same distortion for both sources , is the rate distortion region for correlated source and with distortion criteria and , respectively . ]according to ( * ? ? ?* theorem 5.1 ) there exist some random variables and and a function such that and to show the first inequality of the corollary , note that if is an achievable rate pair for , then is an achievable rate pair for , _ i.e. _ , for the setting where is revealed to the receiver .the first inequality of theorem [ th : outerbound1 ] for becomes for some and that satisfy therefore , where holds due to . for the third inequality note that , hence the markov chain and lemma [ lem : jointcompressionv2y ] gives this together with definition [ def : condgraphent ] and the third inequality of theorem [ th : outerbound1 ] gives the desired result .for achievability , suppose and satisfy the conditions of theorem [ achiev ] , _ i.e. _ , and holds .from two markov chains and the fact that and are independent given , we deduce the long markov chain it then follows that and using theorem [ achiev ] , we deduce that the rate pair given by and is achievable .now , since and are independent given , by claim c. of lemma [ lem : samecondgraph ] .this allows to minimize the above two mutual information terms separately , which shows that the rate pair is achievable ( notice that is a function of the joint distribution only , thus the minimization constraint reduces to .a similar comment applies to the minimization of . )the result then follows from definition [ def : condgraphent ] .[ support set of a random variable ] [ def : supportset ] let where is a random variable taking values in some countable set . define the random variable as where is a random variable taking values in the positive integers , where is a random variable that takes values over the subsets of , and such that if and only if and .note that and are in one - to - one correspondance by definition . in the sequel , with a slight abuse of notationwe write whenever is a random variable that takes values over the subsets of and such that whenever the second index of is .e ._ , whenever for some .since the rate pair is achievable , there exist a decoding function and are received messages from transmitters . 
] such that also , since is partially invertible with respect to , _ i.e _, is a function of and , there exist a function such that define the distortion measures and since we have from and , as .with the same argument one shows that according to ( * ? ? ?* theorem 1 ) and its immediate extension to the case where there is side information at the receiver , it follows that there exists a random variable and two functions and such that and hamming distortion for a function defined over both sources and side information .however , it is straightforward to extend the converse of ( * ? ? ?* theorem 1 ) to handle this setting ( same as which shows that wyner and ziv s result can be extended to the case where the distortion measure is defined over a function of the source and the side information . ) . ] and notice that since the distortion is equal to zero , for any and that satisfy we should have this , according to lemma [ lem : jointdef ] in the appendix , is equivalent to _ remark : _ note that due to definition [ def : supportset ] , takes different values for even if , _i.e. _ , and with the same index but different indices .this is unlike the converse for the point - to - point case ( * ? ? ?* proof of theorem 2 ) , where such a and are considered as one sample . by considering them as one sample we always have but we have which means that holds but may not hold . this is why the reduction to sets ( and so to maximal independent sets ) are possible in point - to - point case but it may not be possible for correlated sources case .[ lem : jointcompressionlong ] given and , random variables satisfy and the markov chains if and only if they satisfy p. for the definition of the support set of a random variable . ] and * satisfy if and only if for all and such that it holds that * if and only if for all and that it holds that * notice that due to the markov chains we can write hence if and only if which is equivalent to the conditions + using the lemma [ lem : jointdef ] completes the proof of the claim . *the proof for the converse part follows from definition [ def : gencondgraph ] .+ to prove the direct part , for , such that we show that * * if , then since for , ( the same argument is valid if . ) . ** if , , then from we have we present a proof that establishes theorem [ capacityonecomplete ] using the canonical theory developed in . for the cardinality bound one should repeat the same argument as the one given at the end of the proof of theorem [ achiev ] .suppose there is a third transmitter who knows and sends some information with rate to the receiver .for this problem , the rate region is the set of achievable rate pairs . by intersecting this rate region with , we obtain the rate region for our two transmitter computation problem .consider the three transmitter setting as above .since is partially invertible , we can equivalently assume that the goal for the receiver is to obtain .this corresponds to in the jana - blahut notation , and , using ( * ? ? ?* theorem 6 ) , the rate region is given by the set of all such that for some that satisfies due to this markov chain we have intersecting with , from we derive that hence , using and , the last three inequalities in become which also imply the first three inequalities in. 
Therefore, when the three last inequalities hold and the condition used in the intersection is satisfied, all the other inequalities are satisfied as well. The rate region for the two-transmitter problem thus becomes the set of rate pairs that satisfy these inequalities for some auxiliary random variables obeying the stated Markov chains. Now, according to Corollary [lem:jointcompressionv2y], the corresponding terms reduce as claimed, and taking the minimizing choice of auxiliary variables completes the proof.
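To make the random-binning machinery used in the achievability argument above more tangible, the following sketch, under assumed finite alphabets and purely illustrative parameters, shows (i) a robust-typicality test of the kind recalled at the start of this section and (ii) uniform random binning of codeword indices, as revealed to the encoders and the decoder. All function names, the alphabet, and the rates are hypothetical and are not the paper's notation; a full decoder would additionally search its two bins for the unique jointly typical pair.

```python
import random
from collections import Counter

def joint_type(xs, ys):
    """Empirical joint distribution of two equal-length sequences."""
    n = len(xs)
    return {pair: c / n for pair, c in Counter(zip(xs, ys)).items()}

def is_jointly_robust_typical(xs, ys, p_xy, eps):
    """Relative (robust) typicality: every empirical mass is within eps*p of p,
    and no pair outside the support of p_xy appears at all."""
    t = joint_type(xs, ys)
    if any(pair not in p_xy for pair in t):
        return False
    return all(abs(t.get(pair, 0.0) - p) <= eps * p for pair, p in p_xy.items())

def bin_assign(num_codewords, num_bins, rng):
    """Uniform random binning of codeword indices (analogous to the revealed bin maps)."""
    return [rng.randrange(num_bins) for _ in range(num_codewords)]

# Toy usage with a synthetic correlated binary pair (illustrative only).
rng = random.Random(0)
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
pairs = rng.choices(list(p_xy), weights=list(p_xy.values()), k=2000)
xs, ys = zip(*pairs)
print(is_jointly_robust_typical(xs, ys, p_xy, eps=0.2))   # True with high probability
print(bin_assign(num_codewords=16, num_bins=4, rng=rng))
```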
A receiver wants to compute a function of two correlated sources and side information available at the receiver. What is the minimum number of bits that needs to be communicated by each transmitter? In this paper, we derive inner and outer bounds to the rate region of this problem which coincide in the cases where the function is partially invertible and where the sources are independent given the side information. These rate regions point to an important difference with the single-source case. Whereas for the latter it is sufficient to consider independent sets of some suitable characteristic graph, for multiple sources such a restriction is suboptimal and _multisets_ are necessary.
_ make everything as simple as possible , but no simpler . _-albert einstein .stars contain three dimensional ( 3d ) , turbulent plasma .they are much more complex than the simplified one dimensional ( 1d ) models we use for stellar evolution .computer power is not adequate at present for well - resolved ( i.e. , turbulent ) 3d simulations of _ whole _stars for _ evolutionary _ timescales .we attempt to tame this complexity by ( 1 ) use of 3d simulations as a foundation , ( 2 ) application of the reynolds - averaged navier - stokes ( rans ) procedure to these simulations to discover dominant terms ( closing the rans system ) , and ( 3 ) construction of simple physical models , consistent with the 3d simulations , for use in stellar evolution codes .we call this approach `` 321d '' because a central feature is the projection of 3d simulations down to 1d for use as a replacement for mixing - length theory ( mlt ; ) .the process is designed to allow testing , extension , and systematic improvement .formally , the rans equations are incomplete unless taken to infinite order ; they must be _ closed _ by truncation at low order to be useful .this need for truncation is due to the nature of the reynolds averaging , which allows _ all _ fluctuations rather than only _ dynamically consistent _ ones .closure requires additional information to remove these extraneous solutions .using 3d simulations avoids this problem by providing only dynamically consistent fluctuations . as a complement to the full rans approach, we consider approximations which focus on dynamics ; these provide a connection to historical work on convection in astrophysics and meteorology .such a minimalist step may be easier to implement in stellar evolutionary codes , and still provide physical insight . 
in the turbulent cascade ,kinetic energy and momentum are concentrated in the largest eddies .our approximate model contains both the largest eddies and the kolmogorov cascade .erika bhm - vitense developed the version of mixing - length theory used in stellar evolution in the 1950s , prior to the publication in the west of andrey kolmogorov s theory of the turbulent cascade .mlt might have been different had she been aware of the original work .edward lorenz showed that a simple convective roll had chaotic behavior ( a strange attractor , ) .ludwig prandtl developed the theory of boundary layers , as well as the original version of mlt .all these ideas will be relevant to our discussion , which is based , as far as possible , upon experimentally verified turbulence theory and 3d simulations , and free of astronomical calibration .( -1.5,-1.7)(2.5,0.7 ) ( -1.4,0.1)energy- ( -1.4,-0.15)containing ( -1.4,-0.4)range ( -1.4,-0.7)lorenz ( -0.,0.25)(-0.,-1.3 ) ( 0.1,0.1)inertial ( 0.1,-0.15)subrange ( 1.1,0.25)(1.1,-1.3 ) ( 1.2,0.1)dissipation ( 1.2,-0.15)range ( 0.5,-0.7)kolmogorov ( -0.7,0.45)(1.5,0.45 ) ( -1.4,0.45)larger ( 1.55,0.45)smaller ( -1.4,-1.2)integral scale ( 1.2,-1.2)small scale ( -1.0,-1.6)(0.2,-1.6 ) ( -1.4,-1.6)iles ( 0.9,-1.6)(1.9,-1.6 ) ( 2.0,-1.6)dns the 3d turbulent energy cascade is illustrated in figure [ cascade ] .the turbulent motion is driven at the largest scale ( the `` integral '' scale ) , which contains most of the kinetic energy .these motions are unstable and break up into smaller - scale flow patterns dominated by inertial forces ( the inertial subrange " ) .this continues to scales small enough for microscopic effects ( viscosity ) to finally provide damping of the flow at the kolmogorov scale .both the inertial subrange and the dissipation range are insensitive to the details of the boundary conditions at the integral scale , and are `` universal '' in this sense .we use the term universality " to mean the property of insensitivity to boundary conditions at the integral scale . found the striking result that the rate of dissipation is insensitive to the value of the viscosity , but is determined by the rate that the largest - scale flows feed the cascade ._ this behavior of the non - linear flow `` hides '' the microscopic value of the viscosity ._ we use kolmogorov theory to describe the flow in the range where universality holds . direct numerical simulations ( dns )resolve the small scales at which dissipation happens , and can extend up to the inertial range , but not to stellar scales .implicit large eddy simulations ( iles ) can extend from stellar ( integral ) scales down to the inertial range , but not to the dissipation range .[ cascade ] illustrates both .landau objected to the notion of complete universality on the grounds that the largest scales were subject to boundary conditions which would be specific to the case in question .we will incorporate this idea by splitting the turbulent flow into two parts : the integral - scale motion and the turbulent cascade . 
as an aid to understanding the integrated properties of the integral - scale motion , we are guided by the simplest model of a convective roll , due to .this model contains the famous lorenz strange attractor , and exhibits chaotic behavior .it also agrees surprisingly well with three - dimensional ( 3d ) simulations of turbulent convection associated with oxygen burning prior to core collapse .this approximation does lack multi - mode behavior , as compared with the simulations , which are dominated by five low order modes ( see fig . 1 in ) ; this may affect the accuracy of the representation of intermittency at large scales and of coherent structures .our challenge is to simplify this very complex problem , with time dependence and an astronomically large number of degrees of freedom , down to a feasible level for use in a stellar evolutionary code , _ without losing important features_. our approximation , 321d , is an attempt to increase physical realism at feasible cost in computational complexity .it is desirable to avoid astronomical calibration as far as possible , and base changes upon behavior quantified in laboratory and numerical experiments . in particular , we do not validate our approximation by how well it reproduces standard mlt results . by basing approximations on 3d iles simulations that ( 1 ) exhibit turbulence , ( 2 ) have non - uniform composition , and ( 3 ) resolve dynamic boundary behavior , it is possible to remove some of the vagueness inherent in many theoretical treatments of convection .we will compare the global properties of turbulent convection from numerical and analytical viewpoints in section [ sect2 ] , examine the structure and nature of boundaries of convection zones in section [ sect3 ] , and summarize our conclusions in section [ summary ] . in an appendix we provide a derivation from 3d fluid flow equations for some useful expressions . found that 2d simulations of stellar oxygen burning developed large fluctuations at the boundaries of the convective region . found that 3d simulations of the same stage gave no such boundary fluctuations . did both 2d and 3d simulations and showed that the discrepancy was due to a different choice of boundary condition : used rigid boundaries at the edge of the convective region , while the other simulations included dynamically - active stable layers surrounding the convection , a more realistic choice .nevertheless , all obtained a convective velocity of .the global character of the velocity field seemed to be insensitive to the details of the convective boundary , although these fluctuations are an important part of the physics of the boundary itself ( and the extent of the convective region ) .this insensitivity allows us to separate the global problem from the boundary problem ( see also ) ; in this section we focus on the global problem .the turbulent kinetic energy equation may be integrated over a convective region ; in the steady state limit this gives a global balance between driving on the integral scale , and dissipation at the kolmogorov scale ( see fig .[ cascade ] ) .this balance has been verified experimentally and numerically as a common feature of turbulence ( e.g. , ) .this introduces a length scale , the depth of the convective zone , into the problem . 
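As a rough numerical illustration of this global balance between integral-scale driving and small-scale dissipation, the short Python sketch below evaluates a Kolmogorov-type estimate of the dissipation rate and the implied velocity and velocity gradient across scales in the inertial range. The order-unity coefficient and the input values are placeholders, not results of the simulations discussed here.

```python
def dissipation_rate(v_rms, ell_d, coeff=1.0):
    """Kolmogorov-style estimate eps ~ coeff * v^3 / ell  [erg g^-1 s^-1]."""
    return coeff * v_rms**3 / ell_d

def scale_velocity(eps, lam):
    """Velocity variation across a scale lam in the inertial range, ~ (eps*lam)^(1/3)."""
    return (eps * lam) ** (1.0 / 3.0)

# Order-of-magnitude placeholders for a deep convective shell (assumed values).
v_rms = 1.0e7      # cm/s, rms convective velocity
ell_d = 1.0e9      # cm, depth of the convective zone, used as the dissipation length
eps = dissipation_rate(v_rms, ell_d)
for lam in (ell_d, ell_d / 10.0, ell_d / 100.0):
    v_lam = scale_velocity(eps, lam)
    # velocity falls off as lam^(1/3); the gradient v_lam/lam grows toward small scales
    print(f"scale {lam:10.3e} cm  v ~ {v_lam:9.3e} cm/s  grad ~ {v_lam/lam:9.3e} 1/s")
```

The same scaling underlies the Reynolds-number and degrees-of-freedom estimates quoted in the next subsection.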
using a classical radiative viscosity , the reynolds number is at the base of the solar convection zone .numerical simulations and laboratory experiments become turbulent for roughly , so fluid flows in stars are strongly turbulent if , as we assume for the moment , rotational and magnetic field effects may be neglected . for homogeneous , isotropic , and steady - state turbulence ,the kolmogorov relation between the dissipation rate of turbulent kinetic energy per unit mass , velocity , and length scale is found that , where is the depth of the convective zone , and is the average convective velocity ; see their eq .6 and nearby discussion , and references to other studies which report such coefficients . for homogeneous , isotropic turbulence, predicted a coefficient for a region well away from boundaries .this factor of 0.8 might change for a strongly stratified region , which would have flow better described by plumes than convective rolls .[ kolmog_eps ] is a global constraint , averaged over fluctuations , and applies to each length scale in the turbulent cascade , so for all scales , or , so that the velocity variation across a scale is , which increases as .the largest scales have the largest velocities , and are dominated by advective transport ( macroscopic mixing ) .the velocity _ gradient _ across the scale is and increases with decreasing .the smallest scales have the largest velocity gradients , and are eventually dominated by microscopic mixing ( ionic diffusion , radiative diffusion , and viscosity ) .a description of the cascade needs both large and small scales ; eq . [ vel ] implies that the largest ( integral ) scales have most of the kinetic energy and momentum , while eq .[ vel_grad ] implies that the smallest scales have the fastest relaxation times , which is consistent with simulations ( e.g. , ) . , 32 , estimated the number of degrees of freedom in a region of turbulent flow to be .laminar flows with free boundaries become unstable at roughly .a direct numerical simulation ( dns ) would require well over zones to resolve the cascade for this marginally unstable case .using ( see section [ cascade_re ] ) , implies a need for more than zones for the sun , far beyond current computer capacity . there may be a smarter way .kolmogorov s great insight is that turbulence hides the details of the viscous dissipation by the nonlinear interactions of the cascade , so that the dissipation rate is determined by macroscopic parameters .simulations show a multimode behavior , but only dominant modes for zones .this dramatic reduction in complexity suggests the use of implicit large eddy simulations ( iles , see fig .[ cascade ] and ) which approximate small scale behavior by a kolmogorov cascade .our approach is to assume that this simplification holds for very large reynolds numbers , and to examine the consequences .simulations which are presently feasible have effective reynolds numbers limited by numerical resolution , but are sufficiently high to give truly turbulent solutions .state of the art simulations , with both improved algorithms and more powerful computers , support this approach .lllll dissipation length & & & & is convection zone depth + & & & & + horizontal gradient & & & & + + radial gradient & & & & + + imposed gradient & & & & + + convective velocity & eq . 
[ mlt ] & & & algebraic ( mlt ) versus ode + & & & & local ( mlt ) versus nonlocal + turbulent heating & none & ignored or& & + & & & & + + kinetic energy flux & assumed & assumed& & + & cancellation&cancellation & no cancellation , & + & by symmetry & by symmetry & asymmetry & + + buoyancy flux & & & & mlt ignores composition gradients + + enthalpy flux & & & & + + acoustic energy flux & none & none & & small for low - mach flow + + composition flux & undefined & none & & + + flux & undefined & none & & + as an aid to the reader , table [ table3 ] gives the correspondence of selected variables in three different theoretical approaches to turbulent convection : mlt , the lorenz model , and the rans formulation .mlt is 1d ( radial ) , the lorenz model is 2d ( radial and transverse ) , while the rans analysis is 3d projected to 1d .mlt is static , the lorenz model and the rans equations are time dependent .mlt is local ( no spatial derivatives of velocity ) while the lorenz model is mildly nonlocal ( it uses global derivatives over the roll ) , and the rans equations are non - local .comparison of mlt and lorenz gives a sense of transverse versus radial properties .in mlt the buoyant acceleration is approximately integrated over a mixing length to obtain an average velocity ( e.g. , ) , the superadiabatic excess is defined in table [ table3 ] and [ nonuniformy ] . here is the gravitational acceleration , is a thermodynamic variable ( for uniform composition ; see [ nonuniformy ] for the nonuniform case ) , is the local pressure scale height , and is an adjustable length scale ( the mixing length ) .[ mlt ] requires that for the velocity to be a real number .the velocity depends only on the local value of the superadiabatic gradient .there are obvious problems with regions in which such integration extends past a boundary .there have been a number of attempts to generalize mlt ; e.g. , , , , , , , , , , , etc .working backward , eq .[ mlt ] may be expressed as a co - moving acceleration equation for a vector field : where is a generalized driving term and a corresponding drag term ( , ch .v ) . a hydrostatic background will be assumed ; see appendix [ conv_append ] .similar equations result from ( 1 ) study of the nonlinear development of the rayleigh - taylor instability ( rti ) , and from ( 2 ) applications of reynolds - averaged navier - stokes ( rans ) analysis to 3d simulations of turbulent convection .if the driving is due to buoyancy alone , ( see [ nonuniformy ] for nonuniform composition ) , , then . if the drag is represented by , where , then we have this is basically a statement of newtonian mechanics , with driving by buoyancy and damping by drag . gives a historical context going back to and to .the early attempts , and many of the recent ones , have used a kinetic theory model , in which the mixing length was a sort of mean free path .in contrast , we interpret eq .[ gen_acc ] as a model of the momentum equation for fluid dynamics , involving structures such as waves , convective rolls , or plumes . because it is non - local , eq .[ mlt_acc ] allows formally stable regions to be convective , unlike mlt , because of finite velocities .this may be relevant for composition mixing in weakly stable regions , and the mass contained in convective regions .taking the dot product of eq .[ mlt_acc ] with gives a kinetic energy equation , for which the steady - state solution ) with the sign of the transit time and the deceleration .] is eq .[ mlt ] , with , and . 
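To illustrate the difference between the local, algebraic MLT velocity and the acceleration-equation picture, the sketch below integrates a one-zone model of the form du/dt = B - |u|u/ell_d, an assumed simplification of eq. [mlt_acc] with purely illustrative parameter values. With constant positive driving the velocity relaxes to the steady-state value (B*ell_d)^(1/2), while a change of sign in B produces smooth buoyant deceleration rather than the imaginary velocity implied by eq. [mlt].

```python
def integrate_velocity(buoyancy, ell_d, u0=0.0, dt=10.0, n_steps=2000):
    """Explicit-Euler integration of du/dt = B(t) - |u| u / ell_d (toy model)."""
    u, history = u0, []
    for k in range(n_steps):
        u += dt * (buoyancy(k * dt) - abs(u) * u / ell_d)
        history.append(u)
    return history

ell_d   = 1.0e9   # cm, damping length of order the convection-zone depth (assumed)
B_drive = 1.0e5   # cm/s^2, buoyant acceleration (assumed)

u_hist = integrate_velocity(lambda t: B_drive, ell_d)
print(u_hist[-1], (B_drive * ell_d) ** 0.5)   # both ~ 1e7 cm/s

# Negative buoyancy, as in a braking layer: the flow decelerates smoothly, no singularity.
u_brake = integrate_velocity(lambda t: -B_drive, ell_d, u0=u_hist[-1], n_steps=5)
print(u_brake)
```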
in eq .[ mlt_ke ] , negative values of are allowed ; this permits buoyant deceleration .the singularities in mlt at the convective zone boundaries ( 9 in ) , and in boundary layers ( 40 in ) are removed .the flow is relative to the grid of the background stellar evolution model , so the co - moving time derivative of turbulent kinetic energy leads to where is a flux of kinetic energy .the generation of the divergence of a kinetic energy flux in this way is robust for dynamic models ; it occurs in the more precise rans approach ( eq . [ tke ] as well as eq .[ mlt_ke ] ) .we may write eq .[ gen_acc ] as in a steady state , the divergence of turbulent kinetic energy flux is zero only if there is a _ local balance between the driving and the drag terms_. otherwise turbulent kinetic energy flux may be non - negligible .the turbulent kinetic energy flux smooths the distribution of turbulent kinetic energy between regions in which it is generated in excess , and the whole turbulent region .the drag term is usually relatively smooth in comparison to the driving term , which can be strongly peaked .turbulent kinetic energy transport is especially important if convection is driven by cooling near the photosphere , so that the ( negative ) buoyancy is localized and the stratification is strong . have shown that stratification enhances the asymmetry in convective kinetic energy flux for driving from the top , and reduces it for driving at the bottom ; see also .this asymmetry is small for shallow convective zones , growing with stratification .this behavior does not occur in mlt , which enforces an _ exact _ symmetry between up - flows and down - flows so that .although simulations of 3d atmospheres exhibit strong downward ( negative ) net fluxes of kinetic energy , such information was not included in mlt fits for such atmospheres .simulations of 3d red - giant atmospheres by indicate that the fits to mlt require at least a two parameter family , as have simulations of deeper convection . in the red giant model in ,the downward directed kinetic energy flux reaches 35% of the maximum enthalpy flux . find that their solar model has a downward directed kinetic energy flux which is 10% of the enthalpy flux .this downward kinetic energy flux must be compensated for by a larger ( outward ) enthalpy flux .this kinetic energy flux is accompanied by a momentum flux , which affects the convective boundary , as shown in [ braking ] .these are nontrivial differences relative to mlt , and may have implications which are detectable with asteroseismology as deviations from the predictions of mlt models . at present, stellar evolution theory has no turbulent heating term .this is inconsistent with kolmogorov theory , which states that turbulent kinetic energy is fed back into the thermal bath at the rate given by eq .[ kolmog_eps ] . from the viewpoint of a dynamic model ( e.g. , eq . [ gen_acc ] ) ,this is a `` frictional '' cost of moving energy by convection . 
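A toy bookkeeping sketch (assumed structure, illustrative numbers) of where that "frictional" cost goes: the Kolmogorov term drains the turbulent kinetic energy reservoir and deposits exactly the same power in the internal energy, so the sum of the two reservoirs grows only by the work done by the driving term.

```python
def step(e_int, e_turb, drive, ell_d, dt):
    """One explicit step of a two-reservoir toy budget (per unit mass)."""
    v = (2.0 * e_turb) ** 0.5            # rms velocity from turbulent KE
    eps = v**3 / ell_d                   # Kolmogorov dissipation = heating rate
    return e_int + dt * eps, e_turb + dt * (drive - eps)

e_int, e_turb = 1.0e17, 1.0e13           # erg/g, illustrative initial values
drive, ell_d, dt, n = 1.0e12, 1.0e9, 1.0, 20000
total0 = e_int + e_turb
for _ in range(n):
    e_int, e_turb = step(e_int, e_turb, drive, ell_d, dt)
# e_turb settles where eps = drive; the total grows only by drive * time (residual ~ roundoff)
print(e_turb, (e_int + e_turb) - (total0 + drive * dt * n))
```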
show that energetic self - consistency requires that the usual stellar evolution equations must be modified to include such a heating term , or equivalently , to explicitly include terms for heating by buoyancy work and divergence of kinetic energy and acoustic fluxes ( see , eq.20 - 22 ; , 21.5 , 21.6 ) .the kolmogorov term appears as heating in the internal energy equation and cooling ( damping ) in the turbulent kinetic energy ( acceleration ) equation .total energy is conserved ; turbulent kinetic energy is transformed into heat .it may be more convenient to apply the heating term directly , rather than use the buoyancy work and divergence of turbulent kinetic energy and acoustic fluxes , as the velocity is available from solution of eq .[ mlt_acc ] .turbulent heating ( and divergence of kinetic energy flux ) may have implications for the standard solar model and solar abundances ) extends the well - mixed region beyond the conventional schwarzschild estimate ; these effects would modify the solar model in the same sense . ] .such heating may also be important for the motion of convective burning shells into electron - degenerate fuel . in the _ local , steady - state , _ limiting case , the left - hand side of eq . [ mlt_ke ] vanishes , and an equation similar to eq .[ mlt ] results , but with a turbulent damping length instead of a mixing length . in simulationsthis is the lesser of the depth of the convective zone or pressure scale heights ; see [ rti ] ] . with this change , _ the cubic equation of bhm - vitense may be derived _ , and we recover a form of mlt . had it been available , bhm - vitense might have identified the mixing length with the kolmogorov damping length ( eq . [ kolmog_eps ] ) .however , kolmogorov found the damping length to be the depth of the turbulent region , so that it is not a free parameter , unlike mlt . there is a further issue : is the _ average _ dissipation rate , not the instantaneous local value ( ) which fluctuates over time and space ( see fig . 4 in ) ; that is , except on average .this is reminiscent of the rans approach ( [ fluctuations ] and [ tke_subsection ] ) .suppose we assume that the integral scale motion is that of a 2d convective roll , where is given by eq .[ mlt_acc ] . using this and a corresponding thermal energy equation, we obtain a form of the classic lorenz equations , but with a nonlinear damping term provided by the kolmogorov cascade ._ because of the time lag , as implied by the time needed to traverse the cascade from integral to kolmogorov scales , the modified equations are even more unstable than the original ones , and have chaotic behavior_. in eq . [ mlt_acc ]it was assumed that the density fluctuation which drives the buoyancy could be represented by , involving only a fluctuation in temperature .this is only true for uniform composition and mild stratification .the formulation makes use of the expansion of pressure fluctuation , which may be written as where here is the sound speed .the composition variable denotes the number of free particles per baryon , and is essentially the inverse of the mean molecular weight .an illustrative and simple example is the ideal gas , . for subsonic flows , , where is the mach number of the flow , and is small fails because pressure fluctuations provide the transverse acceleration necessary to divert the flow ;see [ braking ] . ] . in mlt, the pressure fluctuation is assumed zero ( no acceleration by pressure dilatation ) , so and it is further assumed that to obtain eq . 
[ mlt ] . even in the limit of negligible pressure fluctuations , _ variations in in a way similar to variations in _ , so even small composition variations can be significant when superadiabatic temperature variations are also small .many of the difficulties found using mlt are related to situations in which : overshooting , semi - convection , and entrainment .there seems to be a deep connection between eq .[ mlt_acc ] , rayleigh - taylor instabilities ( rti ) , and turbulent mixing . an almost identical equation ( eq . 4.1 in )is used to describe the nonlinear development of the rti into the turbulent mixing regime .unlike canonical kolmogorv turbulence , the rt turbulent mixing is statistically unsteady , and involves the transport of potential and kinetic energies as well as enthalpy .because of its importance in a variety of high energy - density ( he d ) conditions , much experimental effort for its study as well as an extensive literature have developed .the rti happens when a heavier fluid overlays a lighter one , proceeding from linear instability of perturbations , to mildly nonlinear motion of bubbles and spikes , and then to nonlinear turbulent mixing .the initial acceleration is one - dimensional , but as instability develops , the motion breaks symmetry and approaches isotropy ( as seen in a co - moving frame ) , much like the cascade in steady turbulence .the essential difference between stellar convection and rti is that the rti is not contained , while convection operates within a definite and slowly varying volume .this means that the vertical and the transverse scales are causally connected in convection , but may be independent in the rti .inconsistency between experimental and numerical investigation of the rti in the nonlinear regime led to the problem .the rti in the limit of strong mode - coupling can be initiated to have self - similar evolution , so that the amplitude ( diameter of the bubble ) evolves as , where a is the atwood number ( density ratio , ) , is gravity and the elapsed time .the simulation value is smaller than the experimental value .this discrepancy seems to have been resolved by the idea that unquantified errors in the experimental initial conditions were the cause . to the extent that such uncertainties can not be precisely known , this suggests a statistical approach , and illustrates the need for combined theoretical , experimental , and numerical studies . found that regions of their simulated convection zone exhibited recurring bursts " of convection ( see their fig .these bursts , although multi - modal ( ) , seem to share the chaotic behavior of the model of a single - mode convective roll .this encourages the use of eq .[ mlt_acc ] , which is related to the momentum - driven model of rti , for timescales less than or of order of the transit time . for longer , evolutionary timescales ( stellar convection )we need to average over fluctuations , which means averaging over several transit times for the convective roll ( see eq .[ tke ] below ) .these bursts result from underlying physics similar to that in the rti ; their short timescale behavior may be relevant for stellar pulsations and eruptions ( the -mechanism , , or equivalently , stochastic excitation of oscillations , ) .the weak coupling between driving at the large scale , and dissipation at the small scale , allows time dependent fluctuations of significant amplitude in luminosity and turbulent velocity .the term ( eq . [ gen_acc ] and eq . 
[ steady_u ] ) is needed for chaotic fluctuations and wave generation .these fluctuations have a cellular structure in space and time ; if there are many cells , with random phases , the fluctuations in the average total luminosity are reduced by cancellation .fluctuations are fundamental features of turbulence and mixing . because of sensitivity to initial conditions which can never be known with complete accuracy, descriptions of turbulence should be statistical in nature , even though the equations are deterministic .turbulent simulations can be said to be numerically converged only in a statistical sense .eventually trajectories will diverge .lyapanov exponents characterize this divergence , a feature characteristic of turbulence which makes turbulent mixing so effective .unlike the diffusion picture , in which a stellar mixing front moves radially , limited by the random walk of mean - free - path strides , turbulent mixing involves a network of trajectories throughout the space of the turbulent region , laced with inhomogeneities , which finally disappear at the kolmogorov scale . in stratified regions ,mass conservation constrains the flow , but it tends to change the cross - sectional area of the plumes as opposed to limiting their range . although the flow is locally wild with fluctuations , these tend to cancel upon horizontal and time averaging , leaving a much more placid behavior due to the cancellation of random phases .[ fig_fluct ] illustrates this for a particular but representative case ; the velocity in the theta direction , , is shown as a function of radius , from the oxygen burning data set in .the top panel shows the instantaneous value of ( in units of cm / s ) for a sequence of time steps .the bottom panel shows the running average ( a horizontal average , i.e. , over a spherical surface of radius ) of the same variable over 300 such time steps ( 150s ) , stepping forward over 20 time steps ( 10s ) at a stride , on the same velocity scale .the amplitude in the bottom panel is much reduced by cancellation ; what does remain is the larger length scale , as suggested by the cascade idea discussed in [ cascade_re ] .the cancellation does not work for quadratic terms ; they remain non - zero , e.g. , contributing to the rms velocity in this case ( see [ tke_subsection ] ) . the product of fluctuations in velocity and temperature give rise to the enthalpy flux ; those in velocity and composition give rise to the composition flux. a stellar evolution code must step over the shorter turnover time scales ( weather ) to solve for the evolutionary times ( climate ) .how can this be done ?it requires an average over active and inactive cells .the steady - state limit of the lorenz equation seems to give a reasonable approximation to its average behavior , filtering out the chaotic fluctuations . instead of ,we use we apply the same approximation ( eq . [ steady_u ] ) to eq .[ mlt_acc ] for slow stages of stellar evolution .this allows non - local behavior , will prove important for our discussion of convective boundaries later in [ sect3 ] , and can represent ram pressure ( reynolds stress ) and the flux of turbulent kinetic energy ; see also 3.2 in , for a discussion of ram pressure in 3d simulations relative to mlt .now we have established connections between an acceleration equation ( eq . 
[ gen_acc ] ) and ( 1 ) mlt , ( 2 ) historical attempts to extend mlt , ( 3 ) modern research on rti , ( 4 ) the important advances of and , and ( 5 ) a rational way to step over fluctuations for stellar evolution .a more rigorous alternative is to use the reynolds - averaged navier - stokes ( rans ) approach , which directly averages the fluctuations over space and time .this has been explored by canuto , see also ; a detailed comparison with their work , while desirable , is beyond the scope of this paper .canuto uses simulations and experiments from geophysics to effect a closure of the rans equations , while in contrast , our closure of the rans is based on our 3d simulations .the turbulent kinetic energy equation ( tke ) is obtained by a reynolds decomposition of the velocity , density , and pressure ( detailed discussion may be found in ) . in principlethe tke is exact ; errors arise from closure , i.e. , our analytical approximations to the terms in the rans equations are at fault .well - resolved 3d iles simulations show excellent agreement with the tke , and allow the dominant terms to be identified .being more general than the simpler approximations discussed above , the tke allows us to identify and quantify neglected terms .most importantly , it allows an enormous simplification and compaction of the 3d numerical data , while that data in turn allows a closure of the rans procedure .the tke may be written as : we use and to denote angular and time averages of a quantity .primes refer to fluctuating quantities ; for example , and , and similarly for the time average .the turbulent kinetic energy per unit mass is , a measure of the rms turbulent velocity .the acoustic and turbulent kinetic fluxes are and .the dissipation may be written as a form which we identify with eq .[ kolmog_eps ] , the expression of ; notice that it involves averages of powers of the velocity fluctuation , not the instantaneous values . usingthe rans approach is equivalent to using the bottom panel in fig .[ fig_fluct ] rather than the top ; it removes the fluctuating activity which cancels ( has no net effect ) , while keeping what does not cancel . to better understand the implications of the tke , consider ( 1 ) a steady state ( ) with ( 2 ) no background motion ( ) .then the tke reduces to the divergence of the fluxes balancing the net result of two source terms and , and a damping term : this may be integrated over the convection zone ( taking the surface fluxes to be zero or small at the boundaries ) , and if we ignore the pressure dilatation for the moment , gives an expression for the damping length , which is a global condition that must be satisfied to be consistent with kolmogorov damping , which also requires that is approximately the depth of the turbulent region .this characteristic length scale is a fundamental property of turbulence , and is generated robustly in the numerical simulations .. 
[ ell ] might be regarded as a generalization of the integral constraint to include damping by turbulence .notice that , which appears in both eq .[ mlt_acc ] and eq .[ ell ] , must be solved for consistently ; it tends to be a slowly - varying function , of order of the convective zone depth .[ ell ] involves some of the important `` bulk '' properties discussed by , and is a statement of a global balance between driving and damping .what approximations would be necessary to make the tke equation equivalent to mlt ?in mlt , ( 1 ) the net flux of turbulent kinetic energy is defined to be zero by symmetry , ( 2 ) pressure fluctuations are ignored so the acoustic flux and pressure dilatation are zero , and ( 3 ) the damping length is taken to be an arbitrary adjustable parameter . enforcing these gives this is the local version of the global balance in eq .[ ell ] ; it is equivalent to the bhm - vitense cubic equation of mlt for the appropriate choice of mixing length .this approximation leads to a series of errors : ( 1 ) symmetry between up - flows and down - flows is broken by stratification , so that turbulent kinetic energy fluxes are not generally zero .this is a _ qualitative _ error .( 2 ) pressure fluctuations may not be ignored for strongly stratified convection zones .this is a _ quantitative _ error . find that acceleration by the pressure dilatation term is comparable to that from buoyancy .( 3 ) the damping length may not be freely adjusted if the relation of is to be satisfied .such adjustments are usually necessary to compensate for a lack of non - locality in atmospheres due to the lack of ram pressure , and deeper into interiors due to a lack of kinetic energy flux ( the two parameters discussed in regard to 3d atmospheres in [ dynamics ] ) . our efforts have been three - fold : ( 1 ) construction of accurate numerical solutions of the navier - stokes equations which exhibit turbulence , ( 2 ) theoretical analysis of these solutions in the rans framework to determine the most important features , and ( 3 ) invention of simpler analytic representations which capture the essential features of the numerical solutions . presented a novel analytical theory of convection in stars which does not contain a mixing - length parameter ; this is an alternative to ( 3 ) above , and it is of interest to compare how well it agrees with both our numerical solutions ( 1 and 2 ) , and our analytic approximations ( 3 ) . as we have shown in [ tke_subsection ] , the natural length scale for convection is the dissipation length for the turbulent cascade .part of the foundation of the model of is the use of potential flow and the bernoulli equation ( , eq .10.7 in 10 ) , which result from the euler equation , not the navier - stokes equation .their theory seems to be equivalent to assuming the process occurs on a scale much less than the size of the convective region , so that there is no way to define a length scale for turbulent dissipation .in contrast , following kolmogorov ( [ cascade_re ] ) , the length scale in our theory is the size of the turbulent region , which is not arbitrary but determined by the turbulent flow .our length scale is not an assumption ( as in mlt ) but a consistent and robust result of our simulations .it is the length scale over which driving and damping of turbulence balance ( [ dynamics ] ) . 
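For readers who want to see the RANS averaging of [tke_subsection] in the smallest possible setting, here is a sketch using synthetic data and an assumed profile shape. It performs the horizontal-plus-time averaging that defines means and fluctuations, then forms the turbulent kinetic energy and a single-component kinetic energy flux; density weighting and the other velocity components are omitted for brevity. It is the numerical analogue of keeping what survives the cancellation illustrated in fig. [fig_fluct].

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic radial velocity u_r(time, theta, phi, r): a mean profile plus noise (assumed shape).
nt, nth, nph, nr = 50, 16, 32, 40
r = np.linspace(0.3, 0.9, nr)
u_mean_profile = 1.0e7 * np.sin(np.pi * (r - 0.3) / 0.6)           # cm/s
u_r = u_mean_profile + 3.0e6 * rng.standard_normal((nt, nth, nph, nr))

# Reynolds decomposition: average over angles and time, then form fluctuations.
u_bar = u_r.mean(axis=(0, 1, 2))                                    # <u_r>(r)
u_prime = u_r - u_bar                                               # u_r' fluctuations

tke = 0.5 * (u_prime**2).mean(axis=(0, 1, 2))                       # ~ u_rms^2 / 2, per unit mass
fk  = 0.5 * (u_prime**3).mean(axis=(0, 1, 2))                       # ~ <u' u'^2 / 2>

print(u_bar[nr // 2], tke[nr // 2], fk[nr // 2])
# Linear fluctuations largely cancel in u_bar, but not in the quadratic tke; with the
# symmetric (Gaussian) noise used here fk ~ 0, whereas stratified convection breaks
# that up/down symmetry and produces a nonzero kinetic energy flux.
```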
in order to describe the turbulent cascade ,a complete theory must deal with the whole turbulent region .is the theory of physically correct ?stellar interior convection is extremely turbulent , so the question becomes : what are the errors introduced by ignoring turbulence ? give a careful discussion of the applicability of potential flow ( their 9 ) , and they note that the validity of bernoulli s equation is limited because of the formation of boundary layers in which viscous effects must be included ( see also ) .stars have large reynolds numbers , so that turbulent boundary layers form ( , chap .iii ) , as they do in our simulations ( fig .[ qvsr ] ) .the pasetto theory , like mlt , ignores boundary layers and turbulence , as well as composition gradients .a basic assumption of the theory is that velocities of lateral expansion are much larger than those of the vertical rise of convective elements ( their 4.2 ) .however , the simulations show average velocities in the turbulent region which are not strongly biased toward the laterial directions ; this was already clear in , ( their fig .6 ) , and has held true for subsequent simulations with refined resolution . the rms velocity in the radial direction is actually _ larger _ than the lateral rms velocities , rather than smaller . a key test presented in of their theory is a comparison with mlt at , well inside the super - adiabatic region ( sar ) at in the sun .it is the inefficient convection in the sar which determines the solar radius in calibrations of stellar evolutionary codes , so that a test in the sar would be instructive . state convective elements in this region have low thermal capacity , so that the super - adiabatic approximation can no longer be applied , and the temperature gradient of the elements and surrounding medium must be determined separately " .the theory in its present form may not yet be applicable to the sar .the value of the pasetto theory may prove to lie in its significant conceptual differences from mlt , and in its use as a null case to provide insight into the effects of turbulence .it has been assumed that because deep convection is adiabatic , mlt may be used without problem for standard stellar evolution in deep interiors .this ignores the effects of the velocity field .realistic boundary physics requires more than the adiabatic assumption ; it requires dynamics to define the boundary , and hence the size of the convective regions . because , unlike mlt , eq . [ mlt_acc ] and its variants have a spatial derivative , _ the edges of the convective zones may be found by simply integrating the acceleration equation to find the zeros of the velocity . _ in this section we begin by discussing several issues related to boundaries .we stress the importance of pclet number variation ( [ peclet_num ] ) .we critically review current practice regarding artificial diffusion , real diffusion , semi - convection , and imposed boundary criteria ( [ eggleton_diff ] , [ michaud ] , [ semiconv ] , [ imposed_bnd ] ) .then we discuss the similarities and differences between convection in stellar atmospheres and deep interiors ( [ atmos_conv ] ) . in [ deep_conv ] we present new numerical results concerning convective boundaries ( the development of braking regions , which do not appear in mlt ) . in [ braking ] we then analyze these results , showing that they emerge from simple considerations of physics , which may be used to construct approximations for use in stellar evolutionary codes . 
for the oxygen - burning shell, the temperature has an abrupt jump inside the mixing region ( radius in fig .[ qvsr ] ) .pressure is continuous through the boundary containing this transition , so that the density curve has a corresponding dip ; see fig . 2 in or fig . 5 in implies a steep increase in entropy ; as evolution continues this entropy jump grows , and the transition region narrows .such steep gradients in are a consequence of cooling by neutrinos .they are not seen in earlier , photon - cooled stages of evolution and can only be supported for times short compared with timescales for thermal diffusion and electron heat conduction .this is easily the case for oxygen burning because of high opacity and short evolutionary times ( ) .the pclet number is defined as the ratio of the advective transport rate to the diffusive transport rate of the physical quantity being transported , which here we take to be thermal energy , so in oxygen burning , radiative diffusion is slow while advection occurs rapidly , giving large pclet numbers ( formally infinite since radiative diffusion was small enough to be neglected in some simulations ; the infinity results from the denominator in the definition being a negligible term , not from any exceptional behavior of the physics ) .this contrasts with the situation in stellar atmospheres , in which the radiative diffusion becomes faster than advective transport , so that .this difference in pclet numbers suggests the possibility of a _fundamental flaw in the notion that observations of stellar atmospheres may be sufficient to define the nature of deep stellar convection ._ see discussion in ; .peter eggleton took an early step in dealing with steep gradients in composition , with the introduction of a diffusion operator which he stressed was ad - hoc .this numerically advantageous procedure has been widely adopted for stellar evolution , even though it has the potentially worrisome mathematical property that it increases the order of the spatial derivatives in the equations to be solved .the equation is where is the mass fraction , is the lagrangian mass coordinate , is the effective diffusion coefficient , and is the nuclear reaction network matrix .this is equivalent to modeling convection as `` turbulent diffusion . ''the left - hand side is the heuristic diffusion operator ; the right hand side is the reaction network operator .the actual composition flux is related to the co - moving derivative on the right - hand side ; see , 4.6 .eggleton integrates over the convection zone to eliminate that spatial derivative ; usually it is simply ignored in stellar codes .the eggleton approach is equivalent to approximating the composition flux by a `` down - gradient '' expression ( critically discussed by ) , direct comparison with simulations shows that this can be qualitatively wrong ( by two orders of magnitude ) . for a contact discontinuity ( , 81 ) , , as in eq .[ yflux ] , not , as in eq . [ downgrad ] .proper scaling requires that at a boundary if eq .[ downgrad ] is used .as eggleton intended , the algorithm smooths steep gradients , but sometimes faster than real physical processes do , as eggleton warned . to the extent that gradients in abundance need to be correctly represented ( e.g. , for ionic diffusion , or density structure ) , the down - gradient approximation ( in eq . [ eggleton ] and eq . 
[ downgrad ] ) , is questionable .in particular , fluxes directly computed in simulations show that _ the down - gradient approximation fails in boundary layers _ .while real atomic ( ionic ) diffusion is thought to be slow in stars , the diffusion operator is second order in space derivatives , so that it becomes important in steep composition gradients , i.e. , boundaries .georges michaud has led in the application of true diffusion processes and radiative levitation to stellar evolution .recently these processes have been applied to horizontal branch and sdb stars .gravitational settling and radiative levitation are important to ( 1 ) recover the iron - group opacity bump that excites the pulsations in those stars , ( 2 ) obtain the correct position of the instability strip in the diagram , and ( 3 ) help in understanding their observed atmospheric abundances . because the diffusion uses a difference operator similar to that for ionic diffusion ( second order in space ) , and may reduce the gradients which drive that diffusion, care should be taken that the algorithmic diffusion does not cause errors in the real diffusion ( e.g. , see ) . in stellar physics ,the idea of semi - convection has spawned various algorithms ( e.g. , ) , some of which seem to be physically and numerically inconsistent with others .the term `` semi - convection '' refers to a mixing process which occurs in a region that is stable according to the ledoux criterion but unstable according to the schwarzschild criterion .it generally is thought to involve mixing of composition , but not significant enthalpy .the composition profile may be adjusted to marginal stability according to the ledoux criterion .semi - convection is also often discussed as a double diffusive instability , involving an interaction between radiative diffusion and ionic diffusion .although both radiative and ionic diffusion may be included in a 1d stellar code , this does not capture their interaction and 3d dynamics .semi - convection may be related to oceanic phenomena ( thermohaline mixing ) in which heat flow and salt concentration play the doubly - diffusive roles , and which have a long history of study ( e.g. , see chap . 8 in ; ) . an extensive discussion with numerical simulations based on the oceanic model , and conclude that , while the problem can be solved in the planetary range of parameter space , the stellar case requires a large extrapolation .this difficulty may be further exacerbated by the indication that many such regions in stars are bathed in a flux of g - mode waves , which are a nonlocal effect that may complicate the analysis in a nontrivial way . even with these uncertainties , there are energetic constraints ( see eq . [ e_mix ] ) which must be obeyed .the amount of mixing possible is limited by the energy available to mix , which is generally taken to be related to the excess , so that luminosity is used to supply the energy required to mix .mlt , as a local theory , must be supplemented by additional assumptions about behavior at the boundaries of the convection zone .these are usually discussed in terms of _ linear _ stability theory , i.e. 
, in terms of the ledoux and the schwarzschild criteria being positive .the schwarzschild criterion for convective instability is defined by here is what the dimensionless temperature gradient would be if all the luminosity were carried by radiative diffusion and is the adiabatic gradient ( see appendix ) .the ledoux criterion for convective instability has a composition dependence , and is defined by the last term is written as by , 6.1 , their eq .the factors are defined as in [ nonuniformy ] above .notice that positive and positive both inhibit mixing .neither of these choices seems satisfactory .they have no dependence upon the vigor of the flow on the unstable side of the boundary , which clearly must make a difference .linear perturbation theory examines the instability of a stable region , treating both sides of the boundary equally . in realitythey differ : one side is convective .the stiffness of the non - convective side is measured by the brunt - visl ( buoyancy ) frequency , ( see eq . 6.18 in , and eq . 3.73 in ), where is the frequency of elastic rebound from a perturbation ; it is imaginary in convective regions . here is the dimensionless temperature gradient relevant ) . ] to the perturbed element . on the non - convective side of the boundary, it may be the same as above , giving the second equality , which refers to the tendency to restore stability in the radiative region .a delicate point is the value of near the boundary . by what mechanism does mixing occur ?what is the structure of the partially mixed region of transition between well - mixed and unmixed ?present practice in stellar evolution is to use the schwarzschild criterion , which has no , so that these issues may be ignored , or to use the ledoux criterion with one of the prescriptions for semi - convective mixing ( see [ semiconv ] ) .such interfacial issues have long been studied in the fluid dynamics and geophysics communities ; see for an extensive discussion .the richardson number is defined as some measure of the linear condition for ability of a layer to resist shear is the `` gradient '' richardson number . is stable ; larger stiffness ( ) and less swirling ( ) tend toward stability . in their discussion of entrainment, used a `` bulk '' ( i.e. , non - local and non - linear ) richardson number which involved an integral over the region around the boundary . in the absence of global rotation ,a layer having constant total entropy is energetically neutral with regard to mixing .if after a mixing episode , the luminosity returns to its value for radiative balance ( is unchanged ) , then the additional energy required to remove the stable compositional stratification is both and are intrinsically negative in stars .if this energy changes sign , mixing may occur which is driven by the gradient in composition . using a * specific * kinetic energy of ,a richardson number may be constructed , here the traditional is a plausible condition for stability , at least roughly . in their pioneering work on solar convection , carefully explored the topology of convective flow below the photosphere : converging , cool downdrafts being dominant , with radiative cooling providing the entropy deficit which drives the circulation . examined shallow ( weakly stratified ) convection , driven by atmospheric cooling , and emphasized the importance of the atmosphere in determining the nature of the convection zone . 
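Summarizing the local criteria discussed above, the sketch below evaluates the Schwarzschild and Ledoux discriminants, a standard form of the Brunt-Väisälä frequency, and a bulk Richardson number for assumed input gradients. The sign conventions follow the text (positive values inhibit mixing); the exact expressions and prefactors vary between formulations, and the numerical inputs are placeholders rather than simulation output.

```python
import math

def schwarzschild(nabla_rad, nabla_ad):
    """> 0 means convectively unstable by the Schwarzschild criterion."""
    return nabla_rad - nabla_ad

def ledoux(nabla_rad, nabla_ad, phi_over_delta, nabla_mu):
    """Schwarzschild discriminant corrected for a stabilizing composition gradient."""
    return nabla_rad - nabla_ad - phi_over_delta * nabla_mu

def brunt_vaisala_sq(g, h_p, delta, nabla_ad, nabla, phi_over_delta=0.0, nabla_mu=0.0):
    """N^2 ~ (g*delta/H_p) * (nabla_ad - nabla + (phi/delta)*nabla_mu)  [s^-2]."""
    return g * delta / h_p * (nabla_ad - nabla + phi_over_delta * nabla_mu)

def bulk_richardson(delta_b, length, u_rms):
    """Ri_B ~ (buoyancy jump * length scale) / u_rms^2; larger means a stiffer interface."""
    return delta_b * length / u_rms**2

# Placeholder numbers loosely evoking the stiff lower boundary of a deep convective shell.
n_sq = brunt_vaisala_sq(g=1.0e8, h_p=5.0e8, delta=1.0, nabla_ad=0.4, nabla=0.35,
                        phi_over_delta=1.0, nabla_mu=0.1)
print(schwarzschild(0.45, 0.40), ledoux(0.45, 0.40, 1.0, 0.10))
print(math.sqrt(n_sq), bulk_richardson(delta_b=3.0e6, length=1.0e8, u_rms=1.0e7))
```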
as deep interior convection has no atmosphere , atmospheric physics can have no strong role there ( the circulation is driven by nuclear burning ) .furthermore , the bottom boundary , which could be ignored in the simulations of , may be important for the detailed effects of solar convection on the interior . , 11 , showed that , for stellar interior models , the atmosphere could be represented by an entropy jump between the photosphere and the adiabatic ( deep ) convective region .this entropy jump is a primary parameter for determining the depth of the convection zone .the atmospheric model is crucial for predicting spectral features for a given entropy jump , but has a weak influence on that entropy jump itself .many features of the atmospheric and deep interior simulations are similar , leading to the idea that atmospheric physics , however crucial for spectral formation , may be treated as a boundary condition issue rather than a key feature of deep turbulent convection . showed that the general characteristics of the flow in solar convection ( narrow , fast down - flows with broad , slow up - flows and acceleration by pressure dilatation , ) , require only localized top cooling and stratification .global simulations of the solar convection zone are necessarily less well resolved for comparable computational resources ; the simulations of are beginning to show turbulence , but may require finer zoning to deal with some details of the turbulent flow ( e.g. , ) .lllll mass & & 0.9205 & 0.0161 & 0.1150 + depth & cm & 4.460 & 0.078 & 0.587 + kinetic energy & ke/ & 8.608 & 0.255 & 0.561 + buoyancy luminosity & / s & 4.576 & -0.0342 & -0.0492 + pressure & & 2.032 & 0.046 & 0.228 + number of zones & & 236 & 8 & 23 + the simplest of stellar convection zones are cooled by the local processes ( cooling by neutrino emission and heating by nuclear burning ) , rather than the non - local processes ( radiative transfer ) , giving a cleaner example of the dynamics of boundaries for deep convection .a slightly more complex case is a convection zone with heat conduction by radiative diffusion ; consider both .these two cases cover almost all of the conditions relevant to stellar evolution , except the outer layers simulated in 3d atmospheres . for the oxygen - burning shell, some integral properties of the main convective region and the braking layers are summarized in table [ table2 ] .about 14 percent of the mass and 15 percent of the thickness of the total convection zone are in the boundary layers ( upper bl and lower bl ) , as is 8.5 percent of the turbulent kinetic energy .these boundary regions provide deceleration ( braking ) of the vertically directed flow , allowing it to remain bounded by the convective volume .if the buoyancy flux is , then the rate at which turbulent kinetic energy increases due to buoyancy in a region , is which is positive in the middle region , but negative in the boundary regions .these regions of negative buoyancy are a robust qualitative feature of the simulations , dating back to early 2d work . 
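The split of the convection zone into a driving interior and two braking layers can be read directly off the sign of the buoyancy work. The sketch below does this bookkeeping for an assumed, purely illustrative radial profile of the buoyancy flux, returning the positive (driving) and negative (braking) integrals separately; the few-percent reduction quoted below for the oxygen shell is the same kind of ratio computed from simulation data.

```python
import numpy as np

def split_buoyancy_work(radius, rho, buoyancy_flux):
    """Integrate 4*pi*r^2 * rho * q_b dr, separating positive (driving) from
    negative (braking) contributions of the buoyancy flux q_b [erg/g/s]."""
    dr = np.gradient(radius)
    dwork = 4.0 * np.pi * radius**2 * rho * buoyancy_flux * dr
    return dwork[dwork > 0].sum(), dwork[dwork < 0].sum()

# Illustrative profile: positive buoyancy flux in the interior, negative lobes at both
# boundaries (shapes and amplitudes are assumptions, not simulation data).
r = np.linspace(4.0e8, 9.0e8, 200)                      # cm
rho = 1.0e6 * (r / r[0]) ** -3                          # g/cm^3, toy stratification
x = (r - r[0]) / (r[-1] - r[0])
q_b = 1.0e12 * (np.sin(np.pi * x)
                - 0.35 * np.exp(-((x - 0.03) / 0.05) ** 2)
                - 0.15 * np.exp(-((x - 0.97) / 0.08) ** 2))

driving, braking = split_buoyancy_work(r, rho, q_b)
print(driving, braking, abs(braking) / driving)          # braking is a small fraction
```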
in the oxygen - burning shellthey reduce the driving of turbulent kinetic energy by only 1.8 percent .table [ table2 ] shows the depth of each region in pressure scale heights ( ) .the depth of the boundary zones is not a universal constant in , but varies by a factor of 5 between top and bottom .the last line gives the number of zones in each region for `` medium '' resolution ; the lower boundary region is most demanding , having a steep transition from convective to stable stratification .little of the kinetic energy is lost in the boundary regions , so provides a good first estimate of the rate of generation of turbulent kinetic energy .these regions contain of the mass in the convection zone " ; most of this comes from the upper layer , which has less extreme stratification .[ qvsr ] shows the buoyancy flux versus radius , averaged over 100 seconds , for the oxygen - burning shell simulation ( ob ) ; more detail may be found in . the buoyancy flux, is the rate of work done by gravity .it is the rate of flow of buoyancy , and has units of energy per unit mass per unit time ( e.g. , erg / g / s ) . over most of the convective regionit is proportional to the enthalpy flux .[ qvsr ] shows that the convective zone simulation is naturally split into three regions , separated by two boundaries .the regions above and below are stable .the middle region is relatively uninfluenced by the boundaries ; it is characterized by positive fluxes of buoyancy and of enthalpy , that is , a positive superadiabatic gradient " .it is convectively unstable according to both the schwarzschild and the ledoux criteria . with an appropriate and eq .[ ell ] for an explanation of `` appropriate . '' ] choice of mixing length , this middle region can be reasonably well approximated by mlt .mlt works poorly for the bottom and top boundary layers , which have negative values of .while the central region is defined by positive buoyancy , and positive enthalpy flux , outside the convective zone these quantities are zero , and in the boundaries they are negative . in mltthis is impossible because it would imply that the velocity in eq .[ mlt ] is imaginary , but in eq .[ mlt_acc ] merely implies buoyancy braking , hence the labels braking " in fig .[ qvsr ] . has summarized ; this is a nice prediction of some of the features later revealed in 3d simulations . ]the issue of negative buoyancy and convective flux in connection with penetrative convection . have discussed the overshoot at the bottom of the solar convection zone in the context of convective plumes and magnetic dynamos , and have discussed this in the context of solar rotation and the tachocline . in stellar evolution theory ( i.e. , mlt ) the existence of these braking regions is obscured by use of the schwarzschild ( or ledoux ) linear stability criterion .these braking layers are related to issues of overshoot and penetrative convection .the braking layers are not a part of mlt but , as we shall see ( [ braking ] ) , arise naturally from eq .[ mlt_acc ] .[ uhi - res_qvsr ] shows the inner braking zone ( the region of negative buoyancy work ) at to cm ) .the `` hi - res '' case of ( zones ) and a still higher - resolution case of ( zones ) are shown . 
in comparison with fig .[ qvsr ] , the negative spike " is now well - resolved .a detailed analysis of these simulations will appear elsewhere .the degree of numerical convergence is promising , and we conclude that such _ braking zones are a robust feature of well - resolved simulations of neutrino - cooled stellar convection . _ ) at lower shell boundary for oxygen burning , versus radius .this shows the `` hi - res '' case of ( ) and a higher - resolution case ( ) .the braking zone is indicated by negative buoyancy work at to cm ) .compare to fig .[ qvsr ] , which shows both the upper and lower boundary for the `` medium - res '' case .there is a steady convergence toward a common asymptote as resolution increases , and the two cases shown here are virtually identical , except for small variations in averaging due to differences in time step size . ]the radial velocity becomes small in the braking region , while the transverse velocity extends deeper before it also becomes small .the convective motion turns , and a small ( mostly g - mode ) wave velocity remains .the composition gradient is steeper than would be predicted by algorithmic diffusion ( eq .[ eggleton ] ) , and begins at the bottom of the braking region .the boundary composition profiles are smooth and self - similar when time - averaged .this suggests that the turbulent spectrum has a consistent net effect on the composition profiles and on the mixing , and therefore this interface should be amenable to approximation over time - steps in 1d evolutionary calculations . for oxygen burning ,the composition gradient in the boundary layer is not well - represented by conventional turbulent diffusion theory which requires a span of many turbulence mean - free - paths " per density scale height for validity . in mlt ,the span is a fraction of a scale height ( see in table [ table2 ] ) for oxygen burning .the small length scales are accompanied by small time scales for change , so that a steady state model may be appropriate .fluid motion in a star may be separated into two fundamentally different flows : solenoidal flow ( divergence free : ) and potential flow ( curl free : ) , which together represent the helmholtz decomposition of an arbitrary vector field .potential flow is associated with wave motion and solenoidal flow ( vorticity ) is a feature of turbulence .a striking separation in the nature of the flow is visible at boundaries between these types of flow ; see the discussion of boundary layers in , and fig .19 in .this separation in types of flow is closely related to wave generation and propagation .the structure and nature of these boundary layers is important for estimation of the rate at which turbulent flow moves into or from non - turbulent regions the growth and recession of convective zones . had about 8 zones across the lower boundary layer for `` medium '' resolution ; see also . 
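As an aside on the solenoidal/potential split mentioned above, the decomposition can be carried out spectrally for a periodic field. The sketch below uses a synthetic two-dimensional velocity field (a Taylor-Green vortex plus a compressive perturbation), not simulation output, and simply verifies that the projected parts are divergence-free and curl-free respectively.

```python
import numpy as np

# synthetic periodic 2D velocity field: a Taylor-Green vortex (solenoidal) plus a
# compressive perturbation (potential); a stand-in for a slice of simulated flow
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
ux = -np.sin(X) * np.cos(Y) + 0.3 * np.sin(2.0 * X)
uy =  np.cos(X) * np.sin(Y) + 0.3 * np.sin(2.0 * Y)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                     # protect the mean mode from division by zero

ux_h, uy_h = np.fft.fft2(ux), np.fft.fft2(uy)
proj = (KX * ux_h + KY * uy_h) / k2                # longitudinal projection (k.u)/|k|^2
ux_pot, uy_pot = np.real(np.fft.ifft2(KX * proj)), np.real(np.fft.ifft2(KY * proj))
ux_sol, uy_sol = ux - ux_pot, uy - uy_pot          # remainder is the solenoidal (vortical) part

def ddx(f, K):                                     # spectral derivative along one axis
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

print("max |div u_sol| :", np.abs(ddx(ux_sol, KX) + ddx(uy_sol, KY)).max())   # ~ 0
print("max |curl u_pot|:", np.abs(ddx(uy_pot, KX) - ddx(ux_pot, KY)).max())   # ~ 0
```

Returning to how well the simulations resolve the boundary layers where these two kinds of flow meet: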
had double the resolution across the convective zone ( twice as many radial points ) , but the boundary layer became physically narrower .recent simulations at still higher resolution ( see fig .[ uhi - res_qvsr ] and ) show that the lower boundary layer has about 20 zones and the same physical depth .the computed entrainment rate may be affected by numerical viscosity , so that lower resolution simulations will give overestimates .the `` medium '' resolution of was sufficient to give numerical viscosity ( reynolds number ) similar to that of laboratory experiments on entrainment , but not of stars .coarse resolution in those simulations may have been a partial cause of the difficulties found by in an attempt to apply the entrainment rates of for oxygen burning directly to main sequence stars .the real entrainment rates for stars should be smaller .another issue is that oxygen burning and hydrogen burning have very different pclet numbers , which can affect the entrainment rate ( see below ) .here we construct a simple but dynamically consistent picture of a convective boundary .this is illustrated in fig .[ bnd2_fig ] , which shows the driving , turning , shear and stable regions . at its most elemental level, the velocity vector must turn at boundaries ; that is , _ flow must turn back to stay inside the convective region_. we do _ not _ assume that `` blobs '' disappear ( like mlt ) .most of the momentum is contained in the largest scales , so we focus on the average dynamics at these scales , and the simplest flow patterns .( -10,1)(20,17 ) ( -8,0)(-8,17 ) ( -8,6)(14,6 ) ( -8,10)(0,10 ) ( 0,6)4 - 9090 ( -8,2)(0,2 ) ( -9.6,6)0 ( 14.5,6) ( 0,6)(4,7 ) ( 1.7,7.7) ( -10,14) ( -7,12) ( -1.2,12)(0,12 ) ( 0,15)(0,0 ) ( -1.,4) ( 4,15)(4,0 ) ( 8,15)(8,0 ) ( 3.7,2.5) ( 6.5,2.5) ( 6.3,12.) ( 6.,12)(4,12 ) ( 10.3,4.) ( 10,4.)(8.2,4 . ) ( 10,16)``stable '' ( 4.5,16)shear ( 0.5,16)turn ( -6,16)driving the magnitude of the acceleration required to turn the flow is just the centrifugal value where is the radius of the turning region and the relevant velocity . using eq .[ mlt_acc ] in the steady state limit , and taking , the radial component of the acceleration equation becomes where is the acceleration due to buoyancy and pressure fluctuations ( eq . [ gen_acc ] , and [ a_momentum ] ) .so far we have considered the top of a convective zone ; the bottom of a convection zone behaves similarly if care is taken with signs .simulations show a consistent pattern in velocity and composition structure in the boundary layers .moving toward the boundary from the interior of the convection zone , we find ( 1 ) the radial velocity decreases , ( 2 ) the pressure fluctuations increase , and ( 3 ) the transverse velocity increases to a maximum and then decreases , joining on to a finite and small rms velocity due to wave motion .the transition to small rms velocity occurs at about the same point that the composition changes from being well - mixed to supporting a radial composition gradient .this pattern holds for both top and bottom boundaries .the dynamical equations we use are derived in appendix [ conv_append ] .we use [ a_momentum ] , the same quasi - steady state and thin shell ( ) approximations , and choose an inertial frame in which a hydrostatic background is assumed . 
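Before specializing to the boundary region, the turning argument above can be summarized schematically (a sketch only; here v is the flow speed, r_t the radius of the turning region, a the net radial deceleration from buoyancy and pressure fluctuations, and H_P the pressure scale height):

```latex
\frac{v^{2}}{r_{t}} \;\simeq\; \lvert a \rvert ,
\qquad\Longrightarrow\qquad
\frac{r_{t}}{H_{P}} \;\sim\; \frac{v^{2}}{\lvert a \rvert\, H_{P}} ,
```

which has the form of an inverse Richardson number, anticipating the estimate made below.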
near the boundary ,the radial component of the acceleration is essentially just the buoyancy force ( the first term on the rhs ) is parallel to the gravity vector , which is radial , and provides no transverse acceleration .baryon conservation implies that this reduction in the radial velocity alone will give an increase in density ( matter accumulates ) , which gives an increase in the pressure fluctuation as the boundary is approached .the two transverse components of velocity satisfy the transverse motion requires a transverse acceleration which is provided by a pressure excess ( see also ) at the point of contact of the plume with the boundary ( note the similarity to the rti , [ rti ] ; and ) .this same pressure excess also implies a radial acceleration of the boundary , making the boundary undulate .in addition to the horizontal force from the pressure excess , the buoyancy force is negative , so the net effect on the flow is to complete the turn .the turning region has a width ; this material is well - mixed because it moves back into the convective region after it turns .thus the region might be termed the `` over - shoot '' region , and we are discussing the dynamics of overshoot " .[ uhi - res_qvsr ] shows our highest resolution simulation of the most demanding boundary ; does this simple model of boundary dynamics work for it ?the orientation is reversed for the bottom boundary , so in this case .the steep drop in buoyancy work at corresponds to and the shear " region in fig .[ bnd2_fig ] , which can maintain a composition gradient because the velocity is due to wave motion . at the radius ,at which the radial component of the velocity is , the flow is transverse to the radial coordinate ( ) , so there is a shear layer at this surface which will be unstable to the kelvin - helmholtz ( kh ) instability .the partial mixing layer extends to radius ( at which ) and contains this kh layer .the linear condition for ability of a layer to resist shear ( stability against mixing ) is the `` gradient '' richardson number , .the brunt - visl frequency is evaluated in the stable region , near the boundary , and may be sensitive to resolution .the shear velocity is , and from this crude estimate .this small length is consistent with the steep `` cliff '' in fig .[ uhi - res_qvsr ] .both terms in ( eq . [ radial_acc0 ] ) act to turn the flow , and are comparable in magnitude . a crude but interestingestimate follows if we take , where the is an average value over .the turning radius in units of local pressure scale height is then which is related to the inverse of a richardson number ; compare to eq .[ ri_gradient ] and [ ri_general ] .both and are negative here , giving a positive ratio . the use of eq .[ mlt_acc ] automatically leads to an approximate richardson number criterion for the edge of the convective region , _ without the need of an additional imposed boundary condition beyond the requirement that becomes small _( see [ imposed_bnd ] ) .the minimum in buoyancy work at corresponds to , the edge of the braking region and the turn " in fig .[ bnd2_fig ] . at buoyancy work becomes positive , so that this corresponds to and the beginning of the driving " region , at which changes sign . 
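The shear-stability estimate described above is simple to make concrete. The profiles below are invented stand-ins for the RANS-averaged transverse velocity and Brunt-Vaisala frequency near a lower boundary (they are not taken from the simulations), so only the procedure, not the printed numbers, is meaningful.

```python
import numpy as np

# stand-in radial profiles near a lower convective boundary (not simulation data):
# transverse (shear) velocity u_t and squared Brunt-Vaisala frequency N2
r   = np.linspace(4.20e8, 4.60e8, 400)                        # radius [cm]
u_t = 5.0e6 * np.exp(-((r - 4.45e8) / 1.0e7)**2)              # transverse velocity [cm/s]
N2  = np.where(r < 4.35e8, 4.0e-3, 1.0e-6)                    # stable stratification below ~4.35e8 cm

shear = np.gradient(u_t, r)                                   # dU/dr [s^-1]
Ri = N2 / np.maximum(shear**2, 1.0e-30)                       # gradient Richardson number

# linear shear-stability (Kelvin-Helmholtz) criterion: mixing possible where Ri < 1/4,
# evaluated on the stably stratified side of the interface
stable_side = r < 4.35e8
unstable = stable_side & (Ri < 0.25)
width = r[unstable].max() - r[unstable].min() if unstable.any() else 0.0
print(f"min Ri in the stratified layer: {Ri[stable_side].min():.3f}")
print(f"KH-unstable sliver thickness  : {width:.2e} cm")
```

A thin layer with Ri < 1/4 adjacent to the turning flow is the kind of Kelvin-Helmholtz shear sliver discussed above.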
_contrary to mlt , the radius , at which the schwarzschild criterion is zero , is not at the boundary of zero convective motion ._ how does this braking region develop a negative buoyancy ?suppose the region to is well mixed , to uniform composition and entropy .there is no braking , so convective flow is unabated to the composition gradient beginning at .vigorous entrainment erodes the boundary , causing a thin layer of partially mixed matter , which contains the heavier nuclei from below the oxygen burning shell .this makes the buoyancy more negative , establishing a braking layer and reducing the rate of entrainment .the braking layer grows until the entrainment rate balances the rate of mixing into the edge of the convection zone .if the braking layer is too large , such mixing will reduce it ; there is negative feedback .the braking layer is thinner than the convective zone , so the time scale is shorter than the turnover time ( [ cascade_re ] ) , and a quasi - steady state can be set up . this simplistic analysis ( which ignores fluctuations ) indicates some of the dynamics involved with the braking layers and composition boundaries .further analysis with the new higher resolution simulations is in progress .this limiting case ( `` elastic collision '' ) is a reasonable approximation for the time averaged behavior of the oxygen burning shell , in which radiative diffusion ( and electron heat conduction ) are slow ; here , while the radiative diffusion time is .a measure of the heat lost during the turn is a small number ( ) for oxygen burning , and is roughly the inverse of the pclet number . even within the narrow braking layer, there is little heat flow by radiative diffusion during oxygen burning .this discussion underestimates mixing because it ignores turbulent fluctuations ( [ fluctuations ] ) ; larger fluctuations do more mixing than average , and mixing is irreversible .turbulent kinetic energies fluctuate by factors , so the mixing estimates should be increased accordingly .flow velocities do not go to zero at the convective boundaries , but become small and oscillatory .as convective plumes hit the boundary , and rebound , the boundary moves in response ; how elastic this is depends upon heat flow ( the pclet number ) .this `` adiabatic '' limit breaks down as the turnover time approaches the radiative diffusion time for the turn . for larger radiation mean - free - paths ,the pclet number decreases .no sharp temperature gradients can persist .this gives an `` inelastic collision '' of the flow with the boundary .this is the case for stars in photon - cooled stages of evolution ; even with relatively large pclet numbers for the whole convective region , the narrow boundary layers may still have significant energy flow by radiative diffusion .the previous discussion of the effect of excess pressure still holds , but because of thermal diffusion becomes increasingly dominated by density excess rather than the temperature excess .the red giant model of provides an example of a boundary layer ( the bottom ) in which there is significant radiative diffusion ; analyze this in detail ( their 4.6 ) .as the boundary is approached from above , the down - flows are accelerated by pressure dilatation .these down - flows have an entropy deficit , so that they are heated by radiative diffusion from the surrounding material . 
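The distinction between the "elastic" (neutrino-cooled) and "inelastic" (photon-cooled) limits can be phrased through the Peclet number, the ratio of the radiative diffusion time to the turnover time across the layer. The helper below encodes the standard radiative diffusivity; the input values are rough, purely illustrative orders of magnitude, not simulation or stellar-model values.

```python
import numpy as np

SIGMA = 5.670374e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def radiative_diffusivity(T, rho, kappa, c_P):
    """radiative thermal diffusivity chi = 16 sigma T^3 / (3 kappa rho^2 c_P) [cm^2/s]"""
    return 16.0 * SIGMA * T**3 / (3.0 * kappa * rho**2 * c_P)

def peclet(v, ell, chi):
    """Peclet number v*ell/chi: ratio of radiative diffusion time to turnover time"""
    return v * ell / chi

# purely illustrative inputs (not simulation or solar-model values):
# a neutrino-cooled burning shell versus a layer of a photon-cooled envelope
cases = {
    "oxygen-burning shell (illustrative)": (2.0e9, 1.0e6, 0.2, 1.0e8, 1.0e7, 1.0e8),
    "cool envelope layer (illustrative)":  (1.0e4, 1.0e-7, 1.0, 3.0e8, 3.0e5, 2.0e7),
}
for label, (T, rho, kappa, c_P, v, ell) in cases.items():
    chi = radiative_diffusivity(T, rho, kappa, c_P)
    print(f"{label}: chi ~ {chi:.2e} cm^2/s, Pe ~ {peclet(v, ell, chi):.2e}")
```

Returning to the descending flows of the red-giant example above: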
in the braking region, compression causes a hot spot " to develop .the flow is turned to a non - radial direction , and is now cooled by radiative diffusion ( see fig . 7 in ) .such behavior differs from that obtained by present stellar evolution algorithms .the turning of the down - flow forces the mixed region to extend beyond that implied by the schwarzschild criterion , and heating / cooling by radiative diffusion modifies the structure . while modest , such differences can be important for detailed models . in compensation for such changes , a standard solar model requires less opacity to have the same convection zone depth ; this implies a lower metallicity .these changes in the solar model provide a means to reduce the disagreement with helioseismology . gave a justification for compositional smoothing , as did simulations .the thermal characteristics needed follow from the analysis given above , which was not designed for the solar problem , and involved no solar or stellar calibration .a more physically - correct convective boundary condition tends to improve agreement with abundances inferred from 3d stellar atmospheres and the standard solar model .if heat flow processes are included , the `` inelastic collision '' with the boundary allows the loss of heat so that the entropy decreases for the downward flow , enhancing the downward acceleration .this effect tends to drive motion in convective envelopes .heating at the bottom also tends to drive convective flow .however , cooling at the bottom ( as with urca - shells , ) or heating at the top ( downwardly entrained , burning fuel ) both tend to halt the flow .such halting processes can cause convective zones to split . there may be observational evidence supporting this description of boundaries of convection which are deep in stellar interiors .detection of g - mode pulsations in subdwarf b ( sdb ) stars allows an asteroseismic estimation of the size of the he - burning cores , which are significantly larger than predicted by the schwarzschild criterion and standard stellar evolution theory ( see for discussion and references ) .similar issues apparently are general for core helium burning stars observed by kepler .finally , the origin ( ) , in a 1d stellar evolutionary code using mlt , is a boundary as well . the use of eq .[ mlt ] with adequate zoning implies that the convective velocity becomes very small due to symmetry ( derivatives go to zero at the origin ) ._ this is a false braking layer caused by mlt being a local theory . _use of eq .[ gen_acc ] allows flow through the origin provided that a counter flow gives conservation of linear momentum ( e.g. , a * toroidal * roll ) . at the origin in a turbulent convective core , this projects onto 1d as a finite rms velocity , with a zero radial gradient .mlt has problems with velocity at .we have brought more precision to the discussion of stellar convection by the use of 3d simulations of sufficient resolution to exhibit truly turbulent flow and boundary layers .the price paid is that we must replace the unresolved turbulent cascade by kolmogorov theory ( iles approximation ) , and the chaotic behavior of an integral scale roll of lorenz by a steady - state average ._ we use rans averaging to make 3d simulation data concise , and use 3d simulations to give rans closure. 
_ solution of the rans equations , using only the significant terms , is the full 321d procedure .this approach gives us a quantitative and precise foundation , based upon turbulent solutions of the equations of fluid dynamics .these numerical solutions have numerical limitations , which we have discussed .we find that the actual sub - grid dissipation in our simulations is automatically well approximated by the kolmogorov four - fifths law . as a simpler first step , which addresses some of the worst errors of mlt , we focus on the acceleration equation for the turbulent velocity .this makes the theory non - local , time dependent , and produces boundary layers .it is almost identical to the equation developed from experimental study of the rayleigh - taylor instability ( rti ) , indicating a close connection with plume models of convection ; simulations also suggest this connection directly .further development would entail use of rans analysis to better deal with turbulent fluctuations ( [ fluctuations ] and [ tke_subsection ] ) . even within the framework of the simple acceleration equation ,there are several indications of how current practices in stellar evolution could be improved .the least drastic change involves diffusion : artificial diffusion ( [ eggleton_diff ] ) should be used with caution in situations in which real diffusion ( [ michaud ] ) operates , because of distortion of the gradients which drive real diffusion ( both artificial and real diffusion have second - order spatial derivatives ) .the discussion in [ braking ] gives a more realistic way to treat `` overshooting '' , and at the same time , removes the need for an imposed boundary condition ( schwarzschild , ledoux , or richardson ; [ imposed_bnd ] ) .the fluctuations in pressure discussed in [ braking ] will cause wave motion which will drive mixing in semi - convective regions on a dynamical timescale , far faster than the thermal timescale conventionally used ( e.g. , ; see [ semiconv ] ) . for use in stellar evolutionthis approach requires one more differential equation ( for velocity , in addition to the traditional four , e.g. , , , , and ) and additional coupling terms in the usual stellar evolution differential equations ( turbulent heating in the energy equation , and ram pressure in the hydrostatic equation ) .the additional demand upon computational resources is not large .we use the convective flow velocity and the super - adiabatic excess as separate variables , reflecting the fact that they have different correlation lengths .we check that the simplified dynamic model does capture the numerical results of 3d as expressed in the rans formulation ._ this approach is not calibrated to astronomical data , but predictive , being based on simulations and laboratory experiment .the simple 321d approach includes the kolmogorov - richardson turbulent cascade , and allows connections to past and future numerical simulations as a natural consequence ._ the enormous simplification , from 3d turbulent simulations requiring terabytes of storage down to a single additional ordinary differential equation ( e.g. , eq . [ gen_acc ] ) , means that much is missing . 
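That single additional equation can be made tangible with a zero-dimensional toy integration. The sketch below assumes the schematic form dv/dt = a_B - v|v|/ell_d described in appendix [conv_append]; the driving acceleration, damping length, and time step are arbitrary placeholders, not calibrated values.

```python
import numpy as np

# 0-D toy integration of the extra velocity equation of the 321D scheme:
#   dv/dt = a_B - v*|v|/ell_d        (schematic form of eq. [gen_acc])
# with a_B the buoyant driving and ell_d the Kolmogorov damping length.
a_B   = 1.0e2      # buoyant acceleration [cm/s^2] (placeholder)
ell_d = 1.0e9      # damping length [cm] (placeholder)
dt    = 10.0       # time step [s]
v     = 0.0

for _ in range(20000):                      # forward-Euler march to the steady state
    v += dt * (a_B - v * abs(v) / ell_d)

v_balance = np.sqrt(a_B * ell_d)            # steady state: driving balances cascade damping
eps_turb  = v**3 / ell_d                    # Kolmogorov heating fed back to the energy equation
print(f"v -> {v:.3e} cm/s  (balance value {v_balance:.3e} cm/s)")
print(f"turbulent dissipation ~ {eps_turb:.3e} erg/g/s")
```

The velocity relaxes toward the balance value on roughly a turnover time, and the dissipation v^3/ell_d is what would appear as turbulent heating in the energy equation. What such a reduced description leaves out is taken up next.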
for some applications the missing items may be important .one might use the rans equations directly in a stellar evolutionary code , with 3d simulations to guide closure .we have presented a step toward that goal .alternatively , one might add to the simple 321d as needed , using new models guided by rans results .probably both paths should be followed , given the complexity of the problem .we have refrained from offering detailed algorithms because we believe that there may be a variety of useful ones , tailored for existing stellar evolution codes , and to be modified by developing insight .this is not a finished subject .a skeleton algorithm should include : 1 .velocity from an acceleration equation ( eq . [ gen_acc ] , [ dynamics ] ) , 2 .boundary physics : turning , damping , mixing and shear ( [ braking ] ) , 3 .fluxes of enthalpy and composition ( [ nonuniformy ] and [ braking ] ) , 4 .non - locality in velocity : turbulent kinetic energy flux and ram pressure ( [ dynamics ] ) , and 5 .turbulent heating of background by kolmogorov cascade ( eq . [ kolmog_eps ] ) .our first priority is to implement these ideas in stellar evolution codes .we are currently testing in tycho ( ) , and plan to migrate to mesa , monstar ( ) , genec ( ) , and franec ( ) .we will gladly help with implementations in other codes .new simulations to better quantify the boundary physics are in progress ( ; ) .this approach , unlike mlt , is generalizable in principle to include rotation and mhd because it starts with full 3d equations .for example , rotational terms are implicit in the vector form of eq .[ gen_acc ] ; see also .because of the fundamental importance of convection in stellar evolution theory , a replacement for mlt will have implications for many areas throughout astronomy and astrophysics .a few of the most striking are : convective boundaries with low pclet number will be smoother , which reduces the disagreement between helioseismology and solar model predictions ; see and [ sect3 ] . the corrected boundary conditions for convection will place the composition gradient further beyond the schwarzschild zero condition ( [ braking ] ) , requiring a lower opacity below the mixing boundary to get an acceptable solar model .this may be attained by a lower metallicity , which will reduce the disagreement between solar models , and solar abundances determined from 3d atmospheres ._ the combination of these two corrections will shift the standard solar model problem toward the asplund abundances . _ these modifications beyond mlt bear on many discrepancies between asteroseismology and stellar evolution theory .some examples : application of better convective boundary physics will produce larger he burning cores in sdb stars , and reduce the large discrepancy between the asteroseismology determination of core sizes and stellar models .similar issues apparently are general for core helium burning stars observed by kepler .the discrepancy in mixed modes in normal cheb ( `` red clump '' ) stars will be affected .the nature of convective boundaries is affected by radiative diffusion , so that they differ for neutrino - cooled stages of nuclear burning .calibration of convection for late stages , from stages dominated by photon - cooling , requires re - evaluation .detailed estimates of stellar nucleosynthesis and stellar structure based upon an algorithmic diffusion scenario ( e.g. 
, ) are not confirmed , and require re - examination .while the general features of nucleosynthesis yields are robust , detailed abundances depend upon details of mixing and convection .nucleosynthesis from lower mass stars is also affected : asymptotic giant branch ( agb ) stars do not have a third dredge up without `` overshoot '' , which is a convective boundary problem .this dredge up is crucial for s - process nucleosynthesis ( it provides a neutron source , ) . driven by neutrino cooling ,nuclear burning in stars prior to core collapse is vigorous , and in turn drives vigorous convection .convective velocities increase as evolution proceeds .the nuclear energy generation is , on average , in balance with the turbulent dissipation at the kolmogorov scale , so , which relates the nuclear energy generation rate , the average convective velocity , and the depth of the convective zone .velocity fluctuations are large .supernova progenitor models which are 1d can represent average properties , such as convective speed , but not the amplitude and phase of the ( large ) fluctuations of those properties .realistic progenitor models should be dynamic and 3d if they are to be used for accurate core collapse simulations .the size and structure of progenitor cores affects the possibility of producing explosions in core collapse simulations .the predicted size and structure of such cores depends upon the physics of convection used in the stellar evolution codes .detailed scenarios for pre - supernova structure , collapse and explosion , such as found in for example , are not robust , and may require revision when better treatments of mixing are applied .the validity of calibrating neutrino cooled convection on photon cooled stages of evolution is questionable due to the large difference in pclet number .even the size of the he core is uncertain with present algorithms , and will be affected by better treatment of convection and convective boundaries . the theoretical approach to turbulence used abovecan also be applied to the core collapse process itself , giving insight even for 3d simulations which are presently under - resolved due to computational limitations .this work was supported in part by nsf 0708871 , 1107445 , nasa nnx08ah19 g at the university of arizona , and by australian research council grants dp1095368 and dp120101815 ( j. lattanzio , p. i. ) at monash university , clayton , australia , and by the european research council through grant erc - adg no .this work used the extreme science and engineering discovery environment ( xsede ) , which is supported by national science foundation grant number oci-1053575 , and made use of ornl / kraken and tacc / stampede .this work was supported in part by resources provided by the pawsey supercomputing centre with funding from the australian government and the government of western australia , and through the national computational infrastructure under the national computational merit allocation scheme .this work was supported in part by the national science foundation under grant no .phys-1066293 and the hospitality of the aspen center for physics .we wish to thank alvio renzini for asking wda ( repeatedly ) why does mlt work ? 
" , vitorio canuto for helpful hints , and marco limongi , alessando chieffi , norman murray , bill paxton and stan owocki for helpful and encouraging discussions .one of us ( wda ) wishes to thank prof .remo ruffini of icranet , and prof .lars bildsten of the kavli institute of theoretical physics , for their hospitality and support .we wish to thank an anonymous referee for extensive comments which helped improve the paper .we develop the fluid equations in an inertial frame .we begin with a general formulation , and transition to a specifically spherical ( ) choice of coordinates for application to stars. we will decompose variables into a background part and a fluctuating part , e.g. , for pressure .our procedure is chosen for stars in which the background is hydrostatic and spherically symmetric , so that . the vector form of the continuity equation is where is the mass density and is the fluid velocity . in the incompressible limit , for a steady flow , the net flux of mass into a regionequals the mass flux out . in thin boundary layer , perpendicular to the radial direction , the average velocities must satisfy where is either of the symmetric transverse coordinates ( i.e. , locally cartesian ) , to avoid changing the density ( as seen in the eulerian frame ) . show ( their eq .28 ) , that for fluctuations against a steady background , where is the density scale height , and is the radial component of the velocity fluctuation .this approaches zero ( the incompressible limit ) for shallow , subsonic convection ( large density scale height and small radial velocity mach number , , where is the sound speed ) .this velocity `` dilatation '' is due to the vertical motion in the background stratification and becomes an important component in convective driving in deep convection zones .notice that rising plumes ( ) expand and falling plumes contract .the vector acceleration equation ( eq . [ gen_acc ] ) is where is the velocity , with is the kolmogorov damping length , and the variable is defined as in [ dynamics ] .if where is pressure and is gravitational acceleration , then eq . [ nsk ] is a navier - stokes description of the largest scales of turbulence , with a simplified damping term which is consistent with .note that the usual formulation of hydrostatic equilibrium in stellar evolution theory is some variant of the condition .projecting eq .[ nsk ] onto the radial coordinate , we have the full equations in spherical coordinates are shown in 15 , ( see also for a detailed discussion ) , with the bare viscosity terms rather than komogorov s expression for integration of the turbulent cascade . in tensor formthe momentum equation is + { \partial \over \partial x_i } \big ( \zeta { \partial u_l \over \partial x_l } \big ) .\label{tensor - ns}\end{aligned}\ ] ] kolmogorov s four - fifths law states an amazing simplification , that integration over the turbulent cascade reduces the last term in eq .[ tensor - ns ] to ( eq . [ nsk ] ) on average ,ignoring boundary effects ( see [ sect3 ] ) . 
to illustrate how turning happens at boundaries ,it is sufficient to consider the simpler case of flows with and length scales small compared to , so the transverse dimensions are quasi - cartesian ( the inertial terms in are neglected ; for convective cores , the more cumbersome full equations are needed because can not be large near the origin ) .then the two transverse components are symmetric in this approximation and satisfy where is or .we consider finite fluctuations about a static background , so that we substitute and .we ignore variations in ( the cowling approximation , ) .using , the radial equation becomes convection is often described using only the buoyancy term ; the pressure fluctuations are taken to be small , of order the mach number squared .however , near boundaries the pressure fluctuations provide the tangential acceleration which is necessary to turn the flow , and should not be neglected ( see ) .the buoyancy term acts through the density fluctuation , and only in the direction parallel to the gravity vector .the transverse equation is note that the radial and transverse equations are coupled primarily by the pressure fluctuation term , but also by , because where ( turbulence damps regardless of orientation of the large scale flow ) .the fluctuating pressure near convective boundaries insures the generation of waves .following , 6 , the equation of energy conservation is + t { \partial \rho s \over \partial t } , \label{energy_eq}\ ] ] where is the gravitational potential and .if taken to both the steady state and adiabatic limits , this becomes the bernoulli equation .the entropy change equation may be written as where is the net heating from nuclear and neutrino reactions , is the navier - stokes viscous heating term as modified by kolmogorov s four - fifth s law ( see eq .[ kolmog_eps ] , [ nsk ] and [ tensor - ns ] ) , and is the energy flux due to radiative diffusion .the viscous term is missing from mlt and the euler equation .most of the turbulent kinetic energy resides in the largest ( integral ) scale , while turbulent heating occurs at the small ( kolmogorov ) scale .then is the kolmogorov heating from the turbulent cascade , and , and are now the appropriate rans averages .one requirement for bernoulli s equation to be valid , as assumed in ( see [ pasetto ] ) , is that the rhs of eq .[ entropy_term ] must be zero ( , ch .this is found not to be generally true , either in the 3d simulations , or experimentally in turbulent flows .heating is an essential feature of 3d turbulence , which converts large scale , ordered velocities to disordered ones .canuto , v. m. 2012 , , 528 , a76 canuto , v. m. 2012 , , 528 , a77 canuto , v. m. 2012 , , 528 , a78 canuto , v. m. 2012 , , 528 , a79 canuto , v. m. 2012 , , 528 , a80 castellani , v. , giannone , p. , & renzini , a. , 1971a , , 10 , 340 castellani , v. , giannone , p. , & renzini , a. , 1971b , , 10 , 355 cattaneo , f. , brummel , n. , toomre , j. , malagoli , a. , hurlburt , n. e. , 1991 , , 370 , 282 kuhlen , m. , woosley , s. e. , & glatzmaier , g. , _3d stellar evolution _turcotte , s. , keller , s. c. , & cavallo , r. m. , a.s.p .series 293 kuranz , c. , park , h .- s . ,remington , b. a. , drake , r. p. , miles , a. r. , robey , h. f. , and 20 couthors , 2011 , , 336 , 219 tritton , d. j. , _ physical fluid dynamics _ , 2nd ed . , oxford university press , oxford uk turner , j. s. 
, 1973 , _ buoyancy effects in fluids _ , cambridge university press , cambridge uk unno , wasaburo , 1961 , , 13 , 276 zahn , j.-p . , 1991 , , 252 , 179 zahn , j.-p . , 1992 , , 265 , 115 zeldovich , ya . b. , & raizer , yu . p. , physics of shock waves and high - temperature hydrodynamic phenomena , dover publications , inc . , mineola , ny
we examine the physical basis for algorithms to replace mixing - length theory ( mlt ) in stellar evolutionary computations . our 321d procedure is based on numerical solutions of the navier - stokes equations . these implicit large eddy simulations ( iles ) are three - dimensional ( 3d ) , time - dependent , and turbulent , including the kolmogorov cascade . we use the reynolds - averaged navier - stokes ( rans ) formulation to make concise the 3d simulation data , and use the 3d simulations to give closure for the rans equations . we further analyze this data set with a simple analytical model , which is non - local and time - dependent , and which contains both mlt and the lorenz convective roll as particular subsets of solutions . a characteristic length ( the damping length ) again emerges in the simulations ; it is determined by an observed balance between ( 1 ) the large - scale driving , and ( 2 ) small - scale damping . the nature of mixing and convective boundaries is analyzed , including dynamic , thermal and compositional effects , and compared to a simple model . we find that ( 1 ) braking regions ( boundary layers in which mixing occurs ) automatically appear _ beyond _ the edges of convection as defined by the schwarzschild criterion , ( 2 ) dynamic ( non - local ) terms imply a non - zero turbulent kinetic energy flux ( unlike mlt ) , ( 3 ) the effects of composition gradients on flow can be comparable to thermal effects , and ( 4 ) convective boundaries in neutrino - cooled stages differ in nature from those in photon - cooled stages ( different pclet numbers ) . the algorithms are based upon iles solutions to the navier - stokes equations , so that , unlike mlt , they do not require any calibration to astronomical systems in order to predict stellar properties . implications for solar abundances , helioseismology , asteroseismology , nucleosynthesis yields , supernova progenitors and core collapse are indicated .
the origin of the genetic code has been a subject of intense research since its structure was completely elucidated in the early 1970 s . in subsequent years , the scientific community has produced several theories in order to explain why the genetic code has this structure . among these theories ,the most prominent ones are the stereochemical theory , the frozen - accident theory and the coevolutionary theory - . roughly speaking, these theories try to account for the structure of the genetic code by looking at the interactions between codons and amino acids , the biosynthetic relationships among different amino acids and how the metabolic pathways between them have been selected throughout evolution .nevertheless , the fact that all the codons are made up of three nucleotides , has mostly been taken for granted and barely brought into question .one of the most widely used arguments found in the literature to explain the trinucleotide codon structure of the genetic code , was given by sidney brenner in 1961 . according to this argument ,codons are made up of three nucleotides ( or bases , for short ) because there are 20 amino acids to be specified by the genetic information expressed by a 4 letter `` alphabet '' ( the four bases a , g , c , u ) .if codons were composed of only two bases , there would be only 16 different combinations ( ) , which are not enough to specify for 20 amino acids .if instead , codons were made up of more than three bases , there would be at least 256 combinations ( ) , and these are too many for only 20 amino acids .hence , less than three bases per codon are not enough , and more than three would imply an excessive degeneration of the code .the result coming out from this argument is that three bases per codon is the optimal `` bit of information '' that can be used in order to specify for the 20 different amino acids by means of a 4 letter `` alphabet '' .the above argument , however , does not constitute an explanation by itself , mainly because it only moves the question of `` why three ? '' to the questions of `` why twenty ? '' or for that matter `` why four ? '' .there is no reason for the genetic information to codify for only 20 amino acids since living organisms use more than those specified by the genetic code .in addition , this argument assumes that all the codons must have the same length ( number of bases ) , even though more efficient codes can be obtained by allowing the length of the codons to vary . finally , _ given _ that 20 amino acids have to be specified by using 4 different bases, brenner s argument leads to the simplest code that might be thought of .but even in such a case , simplicity has to be accounted for as a relevant criterium . in this workwe address the question of the origin of the three - base codon structure of the genetic code from a dynamical point of view .we consider a simple molecular machine model which captures some of the principal features of the interaction between primitive realizations of the ribosome and of the mrna .our main objective is to present a dynamical scenario , compatible with prebiotic conditions , of how the triplet structure of the genetic code could have arisen .the model we propose is a follow up of the one introduced by aldana , cocho and martnez - mekler and is consistent with the current evidence suggesting the `` rna world '' hypothesi1s . 
in this schemethe crucial molecules involved in the prebiotic and protobiotic processes , that eventually led to codification and translation mechanisms of the genetic information , were rna related.q in our model , based on the setup depicted in fig .1 , a short one - dimensional polymer composed of monomers interacts with a much longer one , via electrostatic forces . in order to avoid confusion , from now onwe will refer to the short polymer as `` the chain '' , and to the long polymer simply as `` the polymer '' .the electrostatic interaction between the chain and the polymer is due to the presence of electric charges , or multipolar moments , in the monomers of both the chain and the polymer .the charges of the monomers of the chain and of the polymer are assigned at random following a uniform distribution .therefore , the resulting chain - polymer interaction potential has a random profile .the chain is allowed to move along the polymer , but is constrained to remain at a fixed perpendicular distance from it .consequently , transport is one - dimensional .one of our main results is to show that under very general conditions , a dynamics is attained in which the chain moves along the polymer in effective `` steps '' whose mean length is three monomers .we argue that this dynamical feature may be one of the underlying causes of the three base codon structure of the genetic code .this paper is organized as follows : section [ themodel ] describes in detail the model and the assumptions introduced . in section [ unichain ] we recall some statistical aspects of our previous analysis of the random interaction potentials between the chain and the polymer for the simplest case in which the former is composed of just one particle ( ) .we exhibit numerically that , even in this simple case , the mean distance between consecutive minima along the interaction potential is very close to three : ( taking the monomer length as spatial unit ) . after retrieving the analytical expression for this distance ,we then look into the probability of two neighboring potential minima being separated by a distance .subindex refers to the number of different types of monomers in the polymer and in the chain .this probability function shows that , even though the mean distance is close to three , the most probable distance between consecutive minima is for . in section [ realseq ]the monomer charges along the polymer are assigned in correspondence with protein - coding regions of the genome of real organisms ( e.g. _ drosophila _ or _e. coli _ ) instead of at random . for this case ,the probability function is modified so that not only the mean distance is , but also the most probable one happens to be . in section [ polychain ]we introduce the more realistic case , which takes into account the fact that the ribosome is not a point particle , that it has spatial structure and presents several simultaneous contact points between its own rrna and the mrna polymer . for small chain lengths ( ) ,the probability distribution is indicative of wide fluctuations and has a form strongly dependent on the particular assignment of charges in the chain .one of our main findings is that for such chains the most likely configurations are those in which both , the mean distance and the most probable one are equal to three , _ even when the monomer charges along the polymer and the chain are assigned at random_. 
in section [ dynamics ] we analyze the dynamics resulting from the model when an external force is pulling the chain , forcing it to move as a rigid object along the polymer .the power spectrum of the velocity of the chain reveals that , under some very general circumstances , for small chain lengths ( ) , there is a sharp periodicity in the dynamics of the system , with a slowing down of the velocity of the chain every three monomers .finally , section [ summary ] is devoted to the discussion of the results and their relevance to the origin of the genetic code .the model we propose consists of a chain of monomers interacting with a very long polymer composed of monomers , with ( see fig.1 ) .the chain is constrained to remain at a given distance perpendicular to the polymer and is allowed to move in along the polymer , we shall define as its position in this direction relative to the polymer .we will denote the monomer charges in the chain and in the polymer by and , respectively .we should mention that by `` charge '' we do not necessarily mean coulomb charge .both and could be dipolar moments , induced polarizabilities , or similar quantities resulting from electrostatic interactions between chain monomers and polymer monomers with potentials of the form , where characterizes the `` charge '' type .we will assume that all the monomers in the chain , and separately , all the monomers in the polymer , are of the same nature , namely , all of them are either coulomb charges , or dipolar moments , or polarizable molecules , etc . in addition , taking into account that in the origin of life conditions the genetic molecules were not yet likely to convey any structured information , we will consider the charges and to be discrete independent random variables , acquiring one of the different values with the same probability .hence , the probability function for both , the and variables , will be where is the dirac delta function . in general , in this work we will take the values as integers .parameter represents the number of different types of monomers from which the polymer and the chain are made of . for the case of real genetic sequences , but we will not restrict the value of to be 4 . all the monomers will have the same length , which we take as the spatial unit of measure : . we also assume the charge in each of the monomers to be uniformly distributed along the length , so that the charge density in the jth - monomer of the polymer , for example , is a constant whose value is .nevertheless , it is worth mentioning that the dynamics of the model does not depend strongly on the particular shape of the monomer charge density , as long it is a smooth function of ( `` smooth '' in the sense of differentiability ) . with the preceding assumptions ,the interaction potential between the ith - monomer in the chain and the jth - monomer in the polymer is given by ^{\alpha/2 } } \label{vmm}\ ] ] where is a constant whose value depends on the unit system used to measure the physical quantities . in the above expression, has already been set equal to .parameter characterizes the kind of interaction between the chain and the polymer : corresponds to an ion - ion interaction , represents an ion - dipole interaction , and so forth . 
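A sketch of the pair interaction consistent with this description, with the jth polymer monomer's charge Q_j spread uniformly over its unit length, the ith chain monomer of charge q_i represented by a point at x_i = x + i, and D the fixed perpendicular chain-polymer distance, is

```latex
V_{ij}(x) \;=\; k\, q_{i}\, Q_{j} \int_{j}^{\,j+1}
   \frac{ds}{\bigl[(x_{i}-s)^{2} + D^{2}\bigr]^{\alpha/2}} ,
```

so that the potential falls off as 1/r^{alpha}; in this convention alpha = 1 would correspond to the ion-ion case and alpha = 2 to the ion-dipole case mentioned above. The published eq. (vmm) may differ in normalization and in how the chain monomer's charge is distributed; only the overall structure is intended here.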
notethat this parameter does not depend on the indices and , since all the monomers in the chain are of the same nature and those in the polymer are themselves of the same nature , differing from each other only in the value of the charge they contain .the overall interaction potential between the whole chain and the entire polymer is given by the superposition of the individual potentials : equations ( [ vmm ] ) and ( [ vcp ] ) establish the type of random potentials we will be considering .our first aim is to analyze the spatial structure of these potentials , giving their statistical characterization .this will be done in the three following sections .subsequently , we will consider the dynamics of the chain moving along the polymer interacting with it by means of a random potential , subject to an external driving force and seek under what conditions , if any , transport in `` steps '' of three monomers can be achieved .let us start with the simplest case , in which the chain consists of just one monomer .we will refer to this case as the `` single - monomer - chain '' case , and to the chain simply as `` the particle '' .the reason to consider this simple situation is twofold : on one hand , it is useful in order to introduce the relevant ideas behind the model . on the other hand ,it is simple enough as to obtain exact analytical results in a more o less straightforward way . in previous workwe have already analyzed some statistical properties of the random potentials given by expressions ( [ vmm ] ) and ( [ vcp ] ) for the case . after a short review of some of those results we center our attention on the probability distribution .the overall particle - polymer interaction potential is given by ^{\alpha/2}}\ ] ] note that in the previous expression we have set , the only charge in the chain , equal to one .fig.2 shows three graphs of the potential for and different values of the parameter . to generate these graphs , the following probability function for the charges used : namely , each one of the variables acquired one of the six different values with probability ( ) . 
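A direct numerical illustration of this construction is straightforward. The fragment below draws random charges from six values, builds the smooth particle-polymer potential by superposition of the assumed pair form sketched above, and measures the spacing of its local minima; the values of D and alpha are arbitrary choices for illustration, not those used for fig.2.

```python
import numpy as np

rng = np.random.default_rng(0)

# random polymer: N unit-length monomers with charges drawn from c = 6 values, and a
# single unit-charge particle held at perpendicular distance D (parameters illustrative)
N, D, alpha = 300, 0.5, 2.0
Q = rng.choice([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0], size=N)

# discretize the uniform charge density of each monomer with 20 sample points
s = (np.arange(N)[:, None] + np.linspace(0.025, 0.975, 20)[None, :]).ravel()
dens = np.repeat(Q / 20.0, 20)

def V(x):
    """particle-polymer interaction potential at chain position x (assumed pair form)"""
    return np.sum(dens / ((x - s)**2 + D**2)**(alpha / 2.0))

x_grid = np.arange(2.0, N - 2.0, 0.05)
pot = np.array([V(x) for x in x_grid])

# local minima of the potential and their mean spacing, in monomer units
is_min = (pot[1:-1] < pot[:-2]) & (pot[1:-1] < pot[2:])
minima = x_grid[1:-1][is_min]
print(f"{minima.size} minima over {N} monomers; mean spacing = {np.diff(minima).mean():.2f}")
```

For small D the spacing found this way stays close to the step-like estimate of roughly three monomer lengths discussed below.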
in fig.2 , the random realization of the charges along the polymer was the same for the three graphs .as can be seen from this figure , the distribution of maxima and minima along the potential does not change by varying the value of the parameter , in the sense that all the maxima and minima remain essentially at the same positions .what occurs as takes larger and larger values is that the potential becomes a step - like function .fig.3 presents an analogous situation , but now keeping constant ( ) and varying .the behavior of the potential is similar to the previous case : the potential becomes a step - like function as decreases and the positions of the maxima and minima are not appreciably modified .the above considerations exhibit that for small values of , say , the distribution of maxima and minima along the potential is entirely determined by the distribution of charges along the polymer and is independent of the particular values acquired by the parameters and .therefore , in order to find out the distribution of maxima and minima along the interaction potential , it is possible to substitute the continuous random potential given by expressions ( [ vmm ] ) and ( [ vcp ] ) , by the equivalent step - like potential defined by \label{vesc}\ ] ] where is the heaviside function is defined as if and if , and is a random variable whose value is directly proportional to the charge of the jth - monomer in the polymer : where is the charge of the particle ( the only monomer in the chain ) . since the random charges are statistically independent , so are the .expression ( [ vesc ] ) , which we will refer to as the * step - like limit * , is suitable for the analytical determination of the probability function of the distances between consecutive potential minima .this probability function gives important information concerning the dynamics of the system .if some external force is acting on the particle ( or the chain ) , forcing it to move in one direction ( right or left ) along the polymer , the particle will spend more time in the energy minima than in the maxima .such a movement may be interpreted by considering the particle as `` jumping '' from one minimum to the next ( see fig.4 ) .it is worth noticing that the mean distance between consecutive minima in the potentials shown in fig.2 and fig.3 is nearly three . in fig.2there are 33 minima distributed among 100 monomers , and consequently the mean distance between consecutive minima in this case is .analogously , the mean distance between neighboring minima in fig.3 is .therefore , it is expected that in its motion along the polymer , the velocity of the particle will slow down , on average , every three monomers , being momentarily `` trapped '' in each of the potential minima . by using the step - like limit , in reference we have shown that the mean distance between consecutive potential minima for a long polymer ( the large n limit ) is given by where is the number of different monomer types .the above equation shows that the mean distance is always between 3 and 4 , and approaches 3 asymptotically as .in particular , for ( the biological value ) we have . in order to characterize the fluctuations around the mean distance , it is useful to compute the probability distribution function , which we recall , gives the probability of two consecutive minima being separated by a distance when there are different types of monomers . 
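Before the exact counting, the step-like result is easy to check by direct Monte Carlo: generate a long random sequence of charge levels, collapse runs of equal adjacent values into plateaus, locate the plateaus that lie below both neighbours, and average the midpoint-to-midpoint spacings. The sketch below does only this and assumes nothing beyond the step-like construction already introduced.

```python
import numpy as np

def mean_minima_distance(c, n=200_000, seed=1):
    """Monte Carlo estimate of the mean distance between consecutive minima of the
    step-like potential for c equally likely charge values; distances are measured
    midpoint-to-midpoint, so runs of equal adjacent charges give half-integer values."""
    rng = np.random.default_rng(seed)
    v = rng.integers(0, c, size=n)                  # step-potential level on each unit monomer
    # collapse runs of equal adjacent values into single plateaus
    starts = np.flatnonzero(np.r_[True, v[1:] != v[:-1]])
    ends = np.r_[starts[1:], n]                     # plateau j occupies monomers [starts[j], ends[j])
    levels = v[starts]
    mids = 0.5 * (starts + ends)                    # midpoint of each plateau [monomer units]
    # a plateau is a potential minimum if it lies below both neighbouring plateaus
    is_min = (levels[1:-1] < levels[:-2]) & (levels[1:-1] < levels[2:])
    return np.diff(mids[1:-1][is_min]).mean()

for c in (2, 4, 6, 20):
    print(f"c = {c:2d}:  <d> ~ {mean_minima_distance(c):.3f}")
```

The estimate decreases from about four at c = 2 toward three as c grows, in line with the analytic expression.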
in the step - like limit ,this computation is carried out by counting all the configurations of the step - like variables in which there are two minima , one at and the other at , with no other minima in between .the situation is illustrated in fig.5a . since in the step - like limit the interaction potentialis constant along every monomer , we will adopt the convention to measure the distance between two adjacent minima from the mid point of the first minimum to the mid point of the second one , as illustrated in fig.5b . with the above convention, the resulting distances can only acquire integer or half - integer values . for a finite number of different charges ,the explicit calculation of the probability function consists mainly on counting configurations , though conceptually straightforward , it involves a considerable amount of algebra .here we present the final expressions : if is integer : , and if is half - integer : , where $ ] . in the above expressions, is a polynomial whose degree and coefficients depend on . for , and the polynomialsare given by \nonumber\end{aligned}\ ] ] for the case of we have also derived a closed expression , which has a much simpler form : the preceding distributions are plotted in fig.6 .it can be seen from this figure that the most probable distance between consecutive potential minima is , except for the case in which .hence , according to the transport mechanism suggested in fig.4 , whenever there are more than two different types of monomers , the particle will move along the polymer in `` jumps '' whose mean length is close to three , but whose most probable length is actually two .the difference between the mean distance and the most probable distance is due to the presence of `` tails '' in the probability function .namely , to the fact that has non zero values even for large . nevertheless , in section [ polychain ] we will show that these `` tails '' can be shrunk almost to zero when the chain is made up of more than one particle ( ) .this is one of the main results of this paper . to end this section ,it is worth mentioning that half - integer distances between two neighboring minima occur when one or both of these minima extend over several monomers ( see fig.5b ) . in these configurations ,the charges of the adjacent monomers constituting the extended minimum have the same value .configurations in which groups of adjacent equally charged monomers occur , are less likely than configurations in which adjacent monomers have different charges , and the former tend to disappear as increases ( see fig.6 ) .the charges along the polymer can be assigned in correspondence with the genetic sequence of an organism , rather than in a random way .the purpose of doing so is to find out how the potential minima and maxima along real genetic sequences are distributed , and to compare the resulting distribution with the one corresponding to the random case . sincegenetic sequences are made out of four different bases ( a , u , c and g ) we consider four different possible values for the charges , i.e. . to proceed further ,it is necessary to establish a correspondence between the charge values and the four bases a , u , c and g. an arbitrary look up table is the following : with the above correspondence , if the jth - base in a given genetic sequence happens to be a , for example , then the charge of the corresponding jth - monomer in the polymer will be . 
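The same machinery applies when the charges follow a real sequence. The fragment below maps bases to charges with an illustrative assignment (the paper's specific table is not reproduced here, and the results are argued below to be insensitive to which of the 24 orderings is used) and tabulates the spacing distribution of the step-potential minima; the sequence shown is a toy string, whereas the figures that follow use concatenated protein-coding and intergenic regions.

```python
import numpy as np

# illustrative base -> charge assignment; any of the 24 orderings would serve equally well
charge_of = {"A": 1, "C": 2, "G": 3, "T": 4, "U": 4}

def minima_distance_histogram(seq):
    """histogram of distances between consecutive minima of the step-like potential
    obtained from a nucleotide sequence (midpoint-to-midpoint, monomer units)"""
    v = np.array([charge_of[b] for b in seq.upper() if b in charge_of])
    starts = np.flatnonzero(np.r_[True, v[1:] != v[:-1]])
    ends = np.r_[starts[1:], len(v)]
    levels, mids = v[starts], 0.5 * (starts + ends)
    is_min = (levels[1:-1] < levels[:-2]) & (levels[1:-1] < levels[2:])
    d = np.diff(mids[1:-1][is_min])
    values, counts = np.unique(d, return_counts=True)
    return dict(zip(values, counts / counts.sum()))

# toy string only; a real test would concatenate protein-coding exons as in fig.7a
toy = "ATGGCTGCAAGTCTGATCGGATCCGTACGATTGACCATGGAGGCTTAA" * 50
for d, p in sorted(minima_distance_histogram(toy).items()):
    print(f"P(d = {d:4.1f}) = {p:.3f}")
```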
figure fig.7a shows the probability distribution computed numerically by using a _ drosophila melanogaster _ protein - coding sequence , 45500 bases in length (several genes were concatenated to construct this sequence ) .the mean distance between consecutive potential minima for this sequence is and , as follows from the figure , the most probable distance is .therefore , in the `` real sequence case '' not only is the mean distance very close to 3 , but also the most probable distance turns out to be 3 .a comparison of fig.6c with fig.7a , shows that for protein - coding genetic sequences , the potential minima along the polymer are more often separated by three monomers than in the random case .when protein - coding sequences are used , the value of increases at and decreases at .the above behavior does not occur when non - coding sequences of the genome are used for the monomer charge assignment along the polymer .fig.7b shows the probability function for the case in which the monomer charges are in correspondence with an intergenic sequence of the _ drosophila s _ genome .the length of the sequence is again 45500 bases . as can be seen from the figure , in this case the probability function looks much more like the one obtained in the random case .it is important to remark that the behavior of exhibited in fig.7a for real protein - coding sequences does not depend on the particular correspondence ( [ corres ] ) between bases and charge values being used , as long as they are of similar order of magnitude and they can allow for an order relation .these conditions hold for the four bases a , t , c and g which have charges of the same order of magnitude appearing in expression ( [ vmm])]and are fulfilled by our present choice of .the order relation is necessary for the interaction potential to have maxima and minima .it is worth asking the effect that changes in the order relation have on the probability function .there are 24 possible order relations among the four bases a , t , c , and g ( permutations ) .the 24 probability functions corresponding to these permutations are plotted in fig.8a for _ drosophila s _ protein - coding sequence . as can be seen from the figure , the probability functions basically overlap , independently of the particular order relation between the bases , with a peak at .the invariance of under base permutations also holds for non - coding sequences as is shown in fig .8.b . , where for intergenic sequences of drosophila _drosophilas_. this value of suggests that non - coding sequences behave as random structures .0.3 in the `` peaking at '' of the probability function seems to be a general characteristic associated with the protein - coding sequences of living organisms , not only with _drosophila_. in fig.9 we show the probability functions obtained from protein - coding sequences of different organisms , and in all the cases the probability functions present their highest value at ( the mean distance is also very close to ) .the fact that the above characteristic is absent in non - coding genetic sequences may be interpreted in evolutionary terms . 
genetic sequences directly involved in the protein - translation processes were selected ( among other things ) as to bring the distance between consecutive potential minima closer to 3 , both in mean and frequency of occurrence .this interpretation raises a question : how likely is it to obtain a randomly generated sequence with a structure similar to that of protein - coding sequences ?in other words , if we generate a random sequence and compute its probability function , how likely is it to come up with a probability function peaking at ? .in order to answer this question , we define the parameters and as 0.3 in if the probability function associated with a given sequence has its highest value at , then the corresponding parameters and will both be greater than one . otherwise , one or both of these parameters will be smaller than one .fig.10a is a plot of vs. for 1000 random sequences , each one consisting of 500 bases ( which is a typical length of sequences coding functional proteins ) .it can be seen from the figure that only a small fraction of the points ( about 0.224 ) fall in region i , for which and .the rest of the points fall in region ii , in which and .therefore , the probability of having a random sequence , 500 bases long , whose consecutive potential minima are more often separated by a distance , is close to . on the other hand ,figs.10b - f show similar graphs , but using protein - coding sequences of real organisms .these graphs were constructed by analyzing short coding sequences 500 bases in length .the fraction of points falling in region i ( and ) for the different organisms of fig.10 is summarized in the following table : [ cols="<,^ " , ] therefore , when a collective interaction between the polymer and the chain prevails , a remarkable property arises : the probability of having a random interaction potential , whose consecutive minima are more often separated by three monomers , is the largest .recent experimental evidence suggests that the ribosome - mrna system presents a ratchet - like behavior in the protein synthesis translocation process . in this view , the ribosome is tightly attached to the mrna thread in the absence of gtp .this is so because the channel in the ribosome through which the mrna passes , is more or less closed .when a gtp molecule is supplied ( and transformed into gdp ) , this channel opens leaving the mrna thread free to move one codon .subsequently the mrna passage in the ribosome closes again , trapping the mrna molecule . in this clamping mechanismseveral physicochemical factors are involved , which if taken into account in detail would lead to complex dynamical equations hard to handle . in this workour approach is to look into the behavior of oversimplified molecular models which might capture some of the essential dynamical features of the system and may shed some light on how this mechanism could have arisen in the origin of life conditions . in our modelling the dynamics of the system is governed by the application of an external force to the chain in the horizontal direction , i.e. parallel to the polymer . by this means the chain will be forced to move along the polymer . in principle , the force may be time dependent , but we will restrict ourselves to a constant term . 
this force might come from a chemical pump ( like gdp ) or from any other electromagnetic force present in prebiotic conditions .the only purpose of this force in our model is to drive the chain along the polymer ( which is assumed to be fixed , limit ) , avoiding it from getting trapped in some of the minima of the polymer - chain interaction potential .therefore , we will also assume that satisfies max .our analysis relies on newton s equation of motion in a high friction regime , where inertial effects can be neglected .this regime actually exists in biological molecular ratchets similar to the one we are considering . under such conditions ,the newton s equation of motion acquires the form where is the friction coefficient . in what follows, we will set , which is equivalent to setting the measure of the time unit .the above , though a deterministic equation , gives rise to a random dynamics due to the randomness of the interaction potential . in order to start analyzing this random dynamics ,let us consider first the single - monomer - chain case . in this case , as before, we will refer to the chain simply as `` the particle '' .in fig.14a we show a typical realization of the velocity of the particle as a function of its position along the polymer .this graph was constructed by solving numerically the equation of motion ( [ newton ] ) , using the fourth order runge - kuta method .the parameter values used were and , and the monomer charge values were ( case ) .fig.14b shows the local - transit times of the particle along a short segment of the polymer ( 40 monomers in length ) .this transit time is represented in arbitrary units , and was computed by counting how many time steps the particle spent in every spatial interval throughout the polymer . in the graph shown , the value of was .it is apparent from this figure that , in its way along the polymer , the particle spends more time in certain regions than in others , the former being more or less regularly spaced along the polymer . in order to find out the spatial regularities in the dynamics of the system , it is convenient to take the fourier transform of the velocity of the particle .let us call the fourier transform of , being the fourier variable conjugate to .fig.15 shows the fourier power spectrum of the velocity , , for two different realizations of monomer charges in the polymer .the parameter values in fig.15a and fig.15b are and respectively .these graphs were computed for the case , using the charge values . from the figure ,it is evident that there exists a dominant frequency in the power spectrum of the velocity ( the highest peak ) , whose corresponding spatial periodicity is .the power spectrum reveals a dynamical regularity in the motion of the particle throughout the polymer .this regularity is inherited from the one present in the random potential , in the sense that the particle spends more time in the minima than in the maxima .the consequence is a slowing down of the velocity nearly every three monomers , which is reflected in the power spectrum .our interpretation is that the peak occurring in the power spectrum of the velocity conveys the information on the average distance between consecutive potential minima , which for the case is . as in section [ realseq ], we can assign the charges along the polymer in correspondence with the genetic sequence of real organisms . 
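before turning to sequence-derived charges, the numerical machinery just described can be sketched compactly. two caveats: the interaction potential below is a smooth stand-in (a cubic spline through the monomer charges) rather than the electrostatic form used in the text, and the parameter values are placeholders. the sketch is therefore meant to illustrate the runge-kutta integration of the overdamped equation and the extraction of the dominant spatial period from the velocity spectrum, not to reproduce the exact numbers quoted above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def standin_potential(charges):
    """smooth stand-in for the polymer-particle potential: a cubic spline
    through (j, q_j), so its minima sit near runs of low charge; the
    electrostatic form of the text differs but has the same kind of
    maxima/minima structure."""
    nodes = np.arange(len(charges), dtype=float)
    return CubicSpline(nodes, np.asarray(charges, dtype=float))

def rk4_overdamped(vprime, force, x0, dt, n_steps):
    """dx/dt = f - v'(x) with gamma = 1, integrated with the classical
    fourth-order runge-kutta scheme."""
    rhs = lambda x: force - vprime(x)
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for i in range(n_steps):
        x = xs[i]
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        xs[i + 1] = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return xs

def dominant_spatial_period(vprime, force, x_min, x_max, dx=0.05):
    """power spectrum of v(x) = f - v'(x) sampled uniformly in x; the
    highest non-zero-frequency peak gives the dominant spatial period."""
    grid = np.arange(x_min, x_max, dx)
    v = force - vprime(grid)
    power = np.abs(np.fft.rfft(v - v.mean())) ** 2
    freqs = np.fft.rfftfreq(grid.size, d=dx)       # cycles per monomer
    return 1.0 / freqs[1:][np.argmax(power[1:])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    charges = rng.choice([1.0, 2.0, 3.0, 4.0], size=500)
    vprime = standin_potential(charges).derivative()
    force = 1.2 * np.max(np.abs(vprime(np.linspace(1.0, 498.0, 5000))))
    x_t = rk4_overdamped(vprime, force, x0=1.0, dt=1e-3, n_steps=50_000)
    print("distance travelled:", x_t[-1] - x_t[0])
    print("dominant period:", dominant_spatial_period(vprime, force, 1.0, 498.0))
```

whether this stand-in peaks exactly where fig.15 does depends on how faithful the potential is; with the electrostatic potential of the text, the dominant period comes out close to the mean spacing of the minima, which is the interpretation given above.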
in order to do that, we will use the same base - charge correspondence as in expression ( [ corres ] ) .the objective is to find out how the dynamics of the system changes when using real genetic sequences instead of random ones . inwhat follows , the value of the parameters and will be , .the power spectrum of the velocity of the particle throughout the polymer , when using protein - coding sequences of different organisms , is shown in fig.16 . to generate these graphs , short coding sequences of several organisms , each 10000 monomers in length , were used .two points are worth noticing in this figure .first , the peak in the power spectrum is much higher than in the random case .this indicates that there is a much more well defined periodicity in the dynamics generated by interaction potentials when protein - coding sequences are used .second , the spatial periodicity reflected in the peak of the power spectrum is much closer to than in the random case .this dynamical behavior is not present when real but non - coding sequences are used .for example , in fig.17 we show the power spectrum of the velocity of the particle along the polymer , for two cases in which the monomer charges in the polymer were assigned in correspondence with intergenic regions of two organisms . as can be seen , the structure of such spectra is similar to the one obtained in the random case . in this sense ,intergenic regions again seem to have a random structure .the fact that the power spectrum corresponding to protein - coding sequences exhibits a very sharp periodicity at , whereas the one corresponding to non - coding sequences does not , has already been reported in the literature . nonetheless , in these previous worksthe power spectrum of the `` bare '' genetic sequences is analyzed , namely , without considering any kind of interaction potential or dynamical behavior .what we have shown here , though , is that this `` structural '' periodicity around three transforms into a _dynamical periodicity _ in the motion of the particle along the polymer .the most interesting dynamics occurs when an extended chain is interacting with the polymer .in such a situation , a collective interaction prevails . at every momentthere are several contact points between the chain and the polymer .as we have already pointed out , collective interaction between the chain and the polymer gives rise to a widely fluctuating probability function .the same occurs with the power spectrum of the velocity of the chain along the polymer .however , these fluctuations , far from being annoying , produce a much richer dynamical behavior than in the single - monomer - chain case . in fig.18we show the power spectra of the velocity of the chain along the polymer for two different random realizations of monomer charges in the chain .the charges in the polymer were the same in both cases .these graphs were constructed with a polymer 500 monomers in length and a 10-monomer chain .the parameter values used were and .also , the charge values were , as above . from the figure , it is apparent that the power spectrum of the velocity exhibits a very well defined dominant frequency , _ even though the charges in both the polymer and the chain were assigned at random_. 
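a quick note on how this collective case can be emulated numerically. the combined chain-polymer potential is not spelled out above, so the sketch below adopts one plausible reading of the step-like limit: with the chain's k-th monomer sitting over polymer monomer x + k, the plateau height at offset x is taken to be the sliding inner product of the two charge patterns. this is an assumption, but it is enough to see the widely fluctuating gap distributions just described; it reuses minima_positions from the earlier sketch.

```python
import numpy as np

def collective_potential(polymer_q, chain_q):
    """assumed step-like collective potential: plateau height at offset x
    is sum_k chain_q[k] * polymer_q[x + k]."""
    p = np.asarray(polymer_q, dtype=float)
    c = np.asarray(chain_q, dtype=float)
    offsets = range(p.size - c.size + 1)
    return np.array([np.dot(c, p[x:x + c.size]) for x in offsets])

def collective_gap_distribution(polymer_q, chain_q):
    v = collective_potential(polymer_q, chain_q)
    gaps = np.diff(minima_positions(v))            # helper defined earlier
    values, counts = np.unique(np.round(gaps, 1), return_counts=True)
    return dict(zip(values, counts / counts.sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    polymer = rng.integers(1, 5, size=500)
    for trial in range(3):                         # different random chains
        chain = rng.integers(1, 5, size=10)
        print(trial, collective_gap_distribution(polymer, chain))
```

running it for several random chains shows how strongly the gap distribution depends on the particular chain realization, which is the fluctuation effect referred to above.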
the power spectrum in fig.18a presents a dominant frequency corresponding to a spatial periodicity , whereas the corresponding periodicity in the power spectrum shown in fig.18b is .we should explain this difference .the power spectrum appearing in fig.18a was constructed by using a polymer and a chain whose associated probability function has the same shape as the one of fig.12a .namely , for this system the probability function has a very high value at .on the other hand , the power spectrum in fig.18b corresponds to a system whose probability function has a very sharp peak at , as the one shown in fig.12b . from our numerical simulations, we can conclude that _ whenever the probability function has a sharp maximum at a distance , the power spectrum of the velocity also presents a sharp peak corresponding to a spatial periodicity . _as we have seen , in the collective - interaction case the most probable configurations are those in which the probability function has its highest value at ( see fig.13 ) .therefore , if we assign at random the monomer charges in the polymer and in the chain , with high probability we will come up with a dynamics possessing a very well defined periodicity : the chain will move along the polymer in `` jumps '' whose length is nearly three monomers .the results presented throughout this work suggest a possible scenario for the origin of the three base codon structure of the genetic code . in this scenario ,primitive one dimensional molecular machines , initially with a random structure , exhibited a regular dynamics with a `` preference '' for a movement in steps of three bases . by `` steps '' we mean a slowing down of the velocity of the chain along the polymer , nearly every three monomers ( see fig.14b ) . even in the simplest case in which the chain consists of only one monomer, the above dynamical regularity is apparent .we can think of the dynamics of primitive molecular machines as being `` biased towards three '' .the preceding property is quite robust inasmuch as it hardly depends on the particular kind of interaction between the polymer and the chain .on one hand , the kind of electrostatic potentials we have used is representative of the actual interaction potentials between particles occurring in nature .these potentials are characterized in our model by the parameter .we have also seen that the distribution distances between neighboring maxima and minima along the interaction potential , characterized by the probability function , does not depend on this parameter ( for small values of ) , i.e. it will be the same whether the interaction is coulombian or dipolar or of any other ( electrostatic ) type .on the other hand , the spatial distribution of interaction potential minima also does not depend on the particular values of the monomer charges and , as long as these values are of the same order of magnitude. an important feature that the charges must comply with is that they take more than two different values .this allows for an order relation to be established among the different types of monomers , leading to a maxima and minima structure of the interaction potential .the probability function , which gives the probability of two consecutive minima being separated by a distance , only depends on , namely , on the number of different types of monomers . as increases , the mean distance between consecutive potential minima approaches threenevertheless , in the particle ( single - monomer - chain ) case , the most probable distance is ( for ) . 
still in this particle case ,considerable changes take place when the charges along the polymer are assigned in correspondence with protein - coding genetic sequences of real organisms . in this case , not only is the mean distance between neighboring potential minima nearly three , but also the most probable one , , happens to be three .this is a remarkable property of protein - coding sequences , perhaps acquired throughout evolution .furhtermore the fact that this `` refinement '' is absent in non - coding sequences of real organisms , strongly suggests that it is a consequence of the dynamical processes involved in the protein synthesis mechanisms .this interpretation is supported by the results obtained when the _ dynamics _ of the particle moving along the polymer is considered . in the random sequence case , there are dominant frequencies in the power spectrum of the particle velocity related with the spatial regularities of the interaction potential .moreover in the protein - coding sequence case , the power spectrum of the velocity shows a very well defined periodicity corresponding almost exactly to a spatial distance .again , this behavior does not occur for non - coding sequences of real organisms , which are not involved in the translation processes .a richer dynamics emerges when the chain is composed of several monomers . in this ,more realistic , collective - interaction case , the probability function presents very wide fluctuations , depending on the particular assignment of monomer charges in the chain .nevertheless , the most probable configurations are those for which the probability function has its highest value at . for these configurations, the power spectrum of the chain velocity along the polymer exhibits a very well defined spatial periodicity at .our results suggest an origin of life scenario in which primordial molecular machines of chains moving along polymers in quasi one - dimensional geometries , that eventually led to the protein synthesis processes , were biased towards a dynamics favoring the motion in `` steps '' or `` jumps '' of three monomers .the higher likelyhood of these primitive``ribosomes '' may have led to the present ribosomal dynamics where mrna moves along rrna in a channel conformed by the ribosome .dynamics may have acted in this sense as one of the evolutionary filter favoring the three base codon composition of the genetic code . 0.5 in * acknowledgements *+ we would like to thank leo kadanoff , sue coppersmith , haim diamant and cristian huepe for very useful discussions and corrections .this work was sponsored by the dgapa - unam project in103300 , the mrsec program of the national science foundation ( nsf ) under award number dmr 9808595 and by the nsf program dmr 0094569 .m. aldana also acknowledges conacyt - mxico a posdoctoral grant .martnez - mekler g , aldana m , cocho g ( 1999 ) on the role of molecular machines in the origin of the genetic code , in `` statistical mechanics of biocomplexity : proceedings of the xv sitges conference , held at sitges , barcelona , spain 8 - 12 june 1998 '' , editors reguera d , vilar jmg , rubi jm , ( springer verlag lecture notes in physics 527:112 - 123 ) .
we address the question , related to the origin of the genetic code , of why there are three bases per codon in the translation to protein process . as a follow - up to our previous work , we approach this problem by considering the translocation properties of primitive molecular machines , which capture basic features of ribosomal / messenger rna interactions , while operating under prebiotic conditions . our model consists of a short one - dimensional chain of charged particles ( rrna antecedent ) interacting with a polymer ( mrna antecedent ) via electrostatic forces . the chain is subject to external forcing that causes it to move along the polymer , which is fixed in a quasi one - dimensional geometry . our numerical and analytic studies of statistical properties of random chain / polymer potentials suggest that , under very general conditions , a dynamics is attained in which the chain moves along the polymer in steps of three monomers . by adjusting the model in order to consider present day genetic sequences , we show that the above property is enhanced for coding regions . intergenic sequences display a behavior closer to the random situation . we argue that this dynamical property could be one of the underlying causes for the three base codon structure of the genetic code . _ james franck institute , the university of chicago , 5640 south ellis avenue , chicago , il , 60637 , us . + de física , unam . apdo . postal 20 - 364 , 01000 méxico d.f . , méxico . + de ciencias físicas , unam . apdo . postal 48 - 3 , 62251 cuernavaca , morelos , méxico . _ _ submitted to the journal of theoretical biology . + november 2001 .
additive manufacturing , or 3d printing , refers to a class of technology for the direct fabrication of physical products from 3d computer - aided design ( cad ) models .in contrast to material removal processes in traditional machining , the printing process adds material layer by layer .this enables direct printing of geometrically complex products without affecting building efficiency .no extra effort is necessary for molding construction or fixture tooling design , making 3d printing a promising manufacturing technique [ ] . despite these promising features , accurate control of a product s printed dimensionsremains a major bottleneck .material solidification during layer formation leads to product deformation , or shrinkage [ ] , which reduces the utility of printed products . shrinkage control is crucial to overcome the accuracy barrier in 3d printing .to control detailed features along the boundary of a printed product , and used polynomial regression models to first analyze shrinkage in different directions separately , and then compensate for product deformation by changing the original cad accordingly .unfortunately , their predictions are independent of the product s geometry , which is not consistent with the physical manufacturing process . built on this work , establishing a generic , physically consistent approach to model andpredict product deformations , and to derive compensation plans .the essence of this new modeling approach is to transform in - plane geometric errors from the cartesian coordinate system into a functional profile defined on the polar coordinate system .this representation decouples the geometric shape complexity from the deformation modeling , and a generic formulation of shape deformation can thus be achieved .the approach was developed for a stereolithography process , and in experiments achieved an improvement of one order of magnitude in reduction of deformation for cylinder products .however , an important issue not yet addressed in the previously cited work on deformation control for 3d printing is how the application of compensation to one section of a product will affect the deformation of its neighbors .compensation plans are always discretized according to the tolerance of the 3d printer , in the sense that sections of the cad are altered by single amounts , for example , as in figure [ compensationexample ] . furthermore ,when planning an experiment to assess the effect of compensation on product deformation , it is natural to discretize the quantitative `` compensation '' factor into a finite number of levels , which also leads to a product having a more complex boundary. ultimately , such changes may introduce interference between different sections of the printed product , which is defined to occur when one section s deformation depends not only on its assigned compensation , but also on compensations assigned to its neighbors [ ] .for example , in figure [ compensationexample ] , the deformation for points near the boundary of two neighboring sections should depend on compensations applied to both . 
by the same logic , interference becomes a practical issue when printing products with complex geometry .therefore , to improve quality control in 3d printing , it is important to formally investigate complications introduced by the interference that results from discretization in compensation plans .we take the first step with an experiment involving a discretized compensation plan for a simple shape .we begin in section [ sec2 ] with a review of interference , models for product deformation , and the effect of compensation .adoption of the rubin causal model [ rcm , ] is a significant and novel feature of our investigation , and facilitates the study of interference .section [ secnocompensationfit ] summarizes the basic model and analysis for deformation of cylinders given by .our analyses are in sections [ secexperimentaldesign][secrefinedmodelinterference ] : we first describe an experiment hypothesized to generate interference , then proceed with posterior predictive checks to demonstrate the existence of interference , and finally conclude with a model that captures interference . a statistically substantial idea in section [ secassessinginterference ]is that , in experiments with distinct units of analysis and units of interpretation [ , pages 1819 ] , the posterior distribution of model parameters , based on `` benchmark '' data , yields a simple assessment and inference for interference in the experiment , similar to that suggested by and . analyses in sections [ secsimplemodelinterference][secrefinedmodelinterference ] demonstrate how discretized compensation plans complicate quality control through the of interference .this illustrates the fact that in complex manufacturing processes , a proper definition of experimental units and understanding of interference are critical to quality control .we use the generalframework for product deformation given by [ ( ) , pages 36 ] . suppose a product has intended shape and observed shape under a 3d printing process .deformation is informally described as the difference between and , where we can represent both either in the cartesian coordinate system or cylindrical coordinate system .cylindrical coordinates facilitate deformation modeling and are used throughout . 
for illustrative purposes ,we define terms for two - dimensional products ( notation for three dimensions follows immediately ) .quality control requires an understanding of deformation in different regions of the product that receive different amounts of compensation .we therefore define a finite number of points on the boundary of the product , corresponding to specific angles , as the experimental units .the desired boundary from the cad model is defined by the function , denoting the nominal radius at angle .we consider only one ( quantitative ) treatment factor , compensation to the cad , defined as a change in the nominal radius of the cad by units at for .compensation is not restricted to be nonnegative .the potential radius for under compensation to is a function of , , and , denoted by .the difference between the potential and nominal radius at defines deformation , and so is defined as our potential outcome for .potential outcomes are viewed as fixed numbers , with randomness introduced in section [ secmodeling ] in our general model for the potential outcomes .this definition of the potential outcome is convenient for visualizing shrinkage .for example , suppose the desired shape of the product is the solid line , and the manufactured product when is the dashed line , in figure [ deformationcurve](a ) .plotting the deformation at each angle yields a visualization amenable to analysis [ figure [ deformationcurve](b ) ] .orientation is fixed : we match the coordinate axes of the printed product with those of the cad model .a unit is said to be affected by interference if for at least one pair of distinct treatment vectors with [ ] .if there is no interference , then is a function of only via the component . as the experimental units reside on a connected boundary, the deformation of one unit may depend on compensations assigned to its neighbors when the compensation plan is discretized .perhaps less plausible , but equally serious , is the possible leakage of assigned compensations across units .these considerations explain the presence of the vector , containing compensations for all units , in the potential outcome notation ( [ eqpotentialoutcomes ] ) .practically , accommodations made for interference should reduce bias in compensation plans for complex products and improve quality control . following [ ( ) , pages 68 ] , our potential outcome model under compensation plan is decomposed into three components : function represents average deformation of a given nominal shape independent of location , and is the additional location - dependent deformation , geometrically and physically related to the cad model .we can also interpret as a low - order component and as a high - order component of deformation .the are random variables representing high - frequency components that add on to the main trend , with expectation and for all . figure [ deformationcurve ] demonstrates model ( [ eqdecomp1 ] ) . in this example , , so is a function of , and .decomposition of deformation into lower and higher order terms yields where , and are coefficients of a fourier series expansion of .the terms with large represent the product s surface roughness , which is not of primary interest . 
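in practice the deformation profile is obtained from measured boundary points, and the decomposition above can be estimated by ordinary least squares on a truncated fourier basis. the sketch below is a minimal version of that step; the truncation order is an arbitrary choice, and the discarded high-order terms play the role of the surface-roughness component.

```python
import numpy as np

def deformation_profile(boundary_xy, nominal_radius):
    """observed radius minus nominal radius at each measured boundary
    point, expressed in polar coordinates and sorted by angle."""
    x, y = np.asarray(boundary_xy, dtype=float).T
    theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)
    delta_r = np.hypot(x, y) - nominal_radius
    order = np.argsort(theta)
    return theta[order], delta_r[order]

def fourier_trend(theta, delta_r, k_max=8):
    """least-squares fit of a truncated fourier series to the deformation;
    the low-order coefficients carry the location-dependent trend, the
    omitted high-order terms correspond to surface roughness."""
    cols = [np.ones_like(theta)]
    for k in range(1, k_max + 1):
        cols.append(np.cos(k * theta))
        cols.append(np.sin(k * theta))
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, delta_r, rcond=None)
    return coef, design @ coef
```

the parametric cylinder model used in the following subsections keeps only the constant and the cos 2-theta harmonic of this expansion.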
under the polar coordinate system, a compensation of units at can be thought of as an extension of the product s radius by units in that direction .bearing this in mind , we first follow [ ( ) , page 8 ] to extend ( [ eqdecomp1 ] ) to accommodate compensations , and then build upon this to give an extension that can help capture interference resulting from discretized compensation plans .let denote the potential radius for under compensation of units to all points .compensation is equivalent , in terms of the final manufactured product , as if a cad model with nominal radius and compensation was initially submitted to the 3d printer .then \\[-8pt ] \nonumber & = & \delta r\bigl(\theta_i , r_0(\cdot ) + x_i , \mathbf{0}\bigr),\end{aligned}\ ] ] where follows the same form as ( [ eqdecomp1 ] ) , abbreviated as consequently , the potential outcome for is \\[-8pt ] & = & \delta r\bigl(\theta_i , r_0(\cdot ) + x_i , \mathbf{0}\bigr ) + x_i \nonumber \\ & = & \mathbb{e } \bigl\ { \delta r\bigl(\theta_i , r_0 ( \cdot ) + x_i , \mathbf{0}\bigr ) \bigr\ } + x_i + \varepsilon_i.\nonumber\end{aligned}\ ] ] the last two steps follow from ( [ eqintstep1 ] ) and ( [ eqintstep2 ] ) , respectively .if is small relative to , then ( [ eqmodelcomp2 ] ) can be approximated using the first and second terms of the taylor expansion of at : {x = 0 } + x_i + \varepsilon _ { i } \\ & = & \delta r\bigl(\theta_i , r_0(\cdot),\mathbf{0}\bigr ) + \bigl\ { 1 + h\bigl(\theta_i , r_0(\cdot ) , \mathbf{0 } \bigr)\bigr\ } x_i , \nonumber\end{aligned}\ ] ] where {x = 0}$ ] . under a specified parametric model for the potential outcomes ,this taylor expansion is performed conditional on the model parameters . when there is no interference , for any , and so ( [ eqcomptaylor ] ) is a model for compensation effects in this case .we can generalize this model to incorporate interference in a simple manner for a compensation plan with different units assigned different compensations .as all units are connected on the boundary of the product , unit s treatment effect will change due to interference from its neighbors , so that will deform not just according to its assigned compensation , but instead according to a compensation .thus , we generalize ( [ eqcomptaylor ] ) to where the _ effective treatment _ is a function of and assigned compensations for neighbors of ( with the definition of neighboring units naturally dependent on the specific product ) , hence potentially a function of the entire vector . allowing the treatment effect for to depend on treatments assigned to its neighboring unitseffectively incorporates interference in a meaningful manner , as will be seen in the analysis of our experiment .huang et al . [ ( ) , page 12 ] constructed four cylinders with , and inches , and used , and equally - spaced units from each . based on the logic in section [ secmodeling ] , they fitted to the data , with independently , and parameters , and independent of .specifically , for the cylinder , the location - independent term is thought to be proportional to , so that with overexposure of units it would be of the form . furthermore , the location - dependent term is thought to be a harmonic function of , and also proportional to , of the form with overexposure .independent errors are used throughout because the focus is on a correct specification of the mean trend in deformation ( appendix [ seccorrelation ] contains a discussion on this point ) . 
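for the cylinder these two expressions are easy to code up. the sketch below writes the expected deformation under a uniform compensation and its first-order taylor approximation; exactly how the overexposure enters the powers of the radius is one plausible reading of the formulas above (the elided expressions are not reproduced here), and the parameter names are placeholders for the quantities estimated in the table that follows.

```python
import numpy as np

def mean_deformation(theta, r0, x, x0, alpha, a, beta, b):
    """expected cylinder deformation at angle theta under a uniform
    compensation x: overexposure x0 plus compensation plus the low-order
    trend evaluated at the compensated radius (one reading of
    eq. modelcompcylinder)."""
    r = r0 + x0 + x
    return x0 + x + alpha * r**a + beta * r**b * np.cos(2.0 * theta)

def mean_deformation_taylor(theta, r0, x, x0, alpha, a, beta, b):
    """first-order expansion around x = 0, valid when x is small relative
    to r0; h is the derivative of the uncompensated deformation with
    respect to the compensation (eq. comptaylor)."""
    r = r0 + x0
    base = x0 + alpha * r**a + beta * r**b * np.cos(2.0 * theta)
    h = alpha * a * r**(a - 1) + beta * b * r**(b - 1) * np.cos(2.0 * theta)
    return base + (1.0 + h) * x
```

comparing the two functions over a grid of compensations gives a quick check of how far the linearisation can be trusted for a given nominal radius.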
specified and placed flat priors on , and , with all parameters independent a priori .posterior draws of the parameters were obtained by hamiltonian monte carlo [ hmc , ] and are summarized in table [ posteriorpredictivetable ] , with convergence diagnostics discussed in appendix [ secdiagnostics ] .a simple comparison of the posterior predictive distribution of product deformation to the observed data [ , page 19 ] demonstrates the good fit , and so we proceed with this specification and parameter inferences to design and analyze an experiment for interference ..9d1.9d2.9d3.15c@ & & & & & + & -1.34 10 ^ -2 & 1.6 10 ^ -4 & -1.34 10 ^ -2 & ( -1.37 , -1.31 ) 10 ^ -2 & + & 5.7 10 ^ -3 & 3.1 10 ^ -5 & 5.71 10 ^ -3 & ( 5.65 , 5.8 ) 10 ^ -3 & + & 8.61 10 ^ -1 & 7.33 10 ^ -3 & 8.61 10 ^ -1 & ( 8.47 , 8.75 ) 10 ^ -1 & + & 1.13 & 5.46 10 ^ -3 & 1.13 & ( 1.12 , 1.14 ) & + & 8.79 10 ^ -3 & 1.5 10 ^ -4 & 8.79 10 ^ -3 & ( 8.5 , 9.07 ) 10 ^ -3 & + & 8.7 10 ^ -4 & 1.18 10 ^ -5 & 8.7 10 ^ -4 & ( 8.5 , 8.9 ) 10 ^ -4 & + substituting from ( [ eqnocompmodel ] ) into the general model ( [ eqmodelcomp2 ] ) , we have \\[-8pt ] \nonumber & & \qquad = x_0 + x_i + \alpha(r_0 + x_0 + x_i)^a + \beta(r_0 + x_0 + x_i)^b \cos(2 \theta_i ) + \varepsilon_{i}.\end{aligned}\ ] ] the taylor expansion at , as in ( [ eqcomptaylor ] ) , yields the model we incorporate interference for a plan with different units assigned different compensations by changing in the right side of ( [ eqmodelcompcylinder ] ) to , with the functional form of derived by exploratory means in section [ secassessinginterference ] . under a discretized compensation plan , the boundary of a product is divided into sections , with all points in one section assigned the same compensation . in the terminology of [ ( ) , pages 1819 ] , these sections constitute units of analysis , and individual angles are units of interpretation .we expect interference for angles near neighboring sections .interference should be substantial for a large difference in neighboring compensations , and negligible otherwise .this reasoning led to the following restricted latin square design to study interference .we apply compensations to four cylinders of radius , and inches , with each cylinder divided into equal - sized sections of radians .one unit of compensation is , and inch for each respective cylinder , and there are only four possible levels of compensation , , and units .two blocking factors are considered .the first is the quadrant and the second is the `` symmetry group '' consisting of -radian sections that are reflections about the coordinate axes from each other .symmetric sections form a meaningful block : if compensation is applied to all units , then we have from ( [ eqmodelcompcylinder ] ) that for , suggesting a need to control for this symmetry in the experiment .thus , for each product , we conceive of the sections as a table , with symmetry groups forming the column blocking factor and quadrants the row blocking factor .based on prior concerns about the possible severity of interference and resulting scope of inference from our model ( [ eqcomptaylor ] ) , the set of possible designs was restricted to latin squares ( each compensation level occurs only once in any quadrant and symmetry group ) , where the absolute difference in assigned treatments between two neighboring sections does not exceed two levels of compensation . 
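the restricted design space just described can be enumerated by brute force. the sketch below generates all 4 x 4 latin squares (quadrants as rows, symmetry groups as columns), maps each to the sixteen consecutive sections of the circle, and keeps those whose circular neighbours never differ by more than two levels. the mapping from (quadrant, symmetry group) to angular position, with the column order reversing in the mirror-image quadrants, is our own assumption about the geometry rather than something stated above.

```python
import numpy as np
from itertools import permutations

def latin_squares():
    """all 576 4x4 latin squares, rows = quadrants, columns = symmetry groups."""
    rows_pool = list(permutations(range(4)))
    squares = []
    for rows in permutations(rows_pool, 4):
        if all(len({r[c] for r in rows}) == 4 for c in range(4)):
            squares.append(np.array(rows))
    return squares

def circle_order(square):
    """assumed mapping of cells to the 16 consecutive pi/8 sections: the
    symmetry-group order reverses in the mirror-image quadrants."""
    order = []
    for quad in range(4):
        cols = range(4) if quad % 2 == 0 else range(3, -1, -1)
        order += [square[quad, c] for c in cols]
    return np.array(order)

def admissible(square, max_jump=2):
    """circular neighbours may differ by at most max_jump levels."""
    seq = circle_order(square)
    jumps = np.abs(np.diff(np.r_[seq, seq[0]]))
    return jumps.max() <= max_jump

if __name__ == "__main__":
    squares = latin_squares()
    ok = [s for s in squares if admissible(s)]
    print(f"{len(ok)} of {len(squares)} latin squares satisfy the restriction")
    rng = np.random.default_rng(7)
    print(ok[rng.integers(len(ok))])   # one randomly chosen admissible design
```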
each productwas randomly assigned one design from this set , with no further restriction that all the products have the same design .our restricted latin square design forms a discretized compensation plan that blocks on two factors suggested by the previous deformation model , and remains model - robust to a certain extent .the chosen experimental designs are in figure [ design ] , and observed deformations for the manufactured products are in figure [ experimentaldata ] .there are , and equally spaced angles considered for the four cylinders . our first task is to assess which units have negligible interference in the experiment .to do so , we use the suggestions of and , who describe when interest exists in comparing a treatment assignment to a baseline .we have in section [ secnocompensationfit ] data on cylinders that receive no compensation ( denoted by ) and a model ( [ eqnocompmodel ] ) that provides a good fit .furthermore , we have a hypothesized model ( [ eqmodelcompcylinder ] ) for compensation effects when interference is negligible , which is a function of parameters in ( [ eqnocompmodel ] ) .if the manufacturing process is in control , posterior inferences based on then yield , by ( [ eqmodelcompcylinder ] ) , predictions for the experiment . in the absence of any other information , units in the experiment with observed deformations deviating strongly from their predictions can be argued to have substantial interference .after all , if has negligible interference under assignment , then this suggests the following procedure to assess interference : calculate the posterior distribution of the parameters conditional on , denoted by . for every angle in the four cylinders , form the posterior predictive distribution of the potential outcome corresponding to the observed treatment assignment ( figure [ design ] ) using model ( [ eqmodelcompcylinder ] ) and .compare the posterior predictive distributions to the observed deformations in the experiment . *if a unit s observed outcome falls within the central posterior predictive interval and follows the posterior predictive mean trend , it is deemed to have negligible interference .* otherwise , we conclude that the unit has substantial interference .this procedure is similar to the construction of control charts [ ] .when an observed outcome lies outside the central posterior predictive interval , we suspect existence of a special cause . 
as the entire product is manufactured simultaneously , we believe that the only reasonable assignable cause is interference .we implemented this procedure and observed that approximately 70%80% of units , primarily in the central regions of sections , have negligible interference ( appendix [ secposteriorpredictivecheck ] ) .this is clearly seen with another graph that assesses effective treatments , which we proceed to describe .taking expectations in ( [ eqmodelcompcylinder ] ) , the treatment effectively received by is we gauge by plugging observed data from the experiment and posterior draws of the parameters based on into ( [ eqtreatmentinterference ] ) .these discrepancy measure [ ] calculations , summarized in figure [ posteriorpredictivetreatment ] , again suggest that central angles in each section have negligible interference : estimates of their effective treatments correspond to their assigned treatments .there is a slight discrepancy between assigned treatments and inferred effective treatments for some central angles , but this is likely due to different parameter values for the two data sets . of more importanceis the observation that the effective treatment of a boundary angle is a weighted average of the treatment assigned to its section , , and its nearest neighboring section , , with the weights a function of the distances ( in radians ) between and the midpoint angle of its section , , and the midpoint angle of its nearest neighboring section , .all these observations correspond to the intuition that interference should be substantial near section boundaries .using ( [ eqtreatmentinterference ] ) .four horizontal lines in each subfigure denote the possible compensations , and dots denote estimates of treatments that units effectively received in the experiment . 
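both the flagging step and the effective-treatment calculation are mechanical once posterior draws from the zero-compensation fit are available. the sketch below assumes each draw is a tuple (x0, alpha, a, beta, b, sigma), an ordering that is our own convention, and for simplicity inverts the taylor-approximate model rather than the exact one when backing out the compensation a unit effectively received; it reuses mean_deformation from the earlier sketch.

```python
import numpy as np

def flag_interference(theta, r0, x_assigned, observed, draws,
                      q=0.995, n_rep=20):
    """posterior predictive check for one unit: simulate deformations under
    the assigned compensation from each posterior draw and flag the unit if
    the observation falls outside the central 99% predictive interval."""
    rng = np.random.default_rng(0)
    sims = []
    for x0, alpha, a, beta, b, sigma in draws:
        mu = mean_deformation(theta, r0, x_assigned, x0, alpha, a, beta, b)
        sims.append(mu + sigma * rng.standard_normal(n_rep))
    sims = np.concatenate(sims)
    lo, hi = np.quantile(sims, 1.0 - q), np.quantile(sims, q)
    return not (lo <= observed <= hi)   # true -> substantial interference

def inferred_effective_treatment(theta, r0, observed, draws):
    """back out the compensation the unit effectively received by solving
    the linearised model for x, averaged over posterior draws."""
    vals = []
    for x0, alpha, a, beta, b, _sigma in draws:
        r = r0 + x0
        base = x0 + alpha * r**a + beta * r**b * np.cos(2.0 * theta)
        h = alpha * a * r**(a - 1) + beta * b * r**(b - 1) * np.cos(2.0 * theta)
        vals.append((observed - base) / (1.0 + h))
    return float(np.mean(vals))
```

plotting inferred_effective_treatment against the angle, section by section, is essentially how the figure described above is read: central angles recover their assigned compensation, while boundary angles drift towards a weighted average with the neighbouring section.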
]we first alter ( [ eqmodelcompcylinder ] ) to where \\[-8pt ] \nonumber & & { } + \bigl\ { 1 + \exp \bigl ( \lambda_{r_0 } |\theta_i - \theta_{i,\mathit{nm}}| - \lambda_{r_0 } | \theta_i - \theta_{i , m}| \bigr ) \bigr\}^{-1 } x_{i,\mathit{nm}},\end{aligned}\ ] ] with denoting midpoint angles for the -radian sections containing and neighboring nearest to , respectively , and compensations assigned to these sections .effective treatment is a weighted average of the unit s assigned treatment and the treatment assigned to its nearest neighboring section .although the form of the weights is chosen for computational convenience , we recognize that ( [ eqweightedtreatment ] ) belongs to a class of models agreeing with prior subject - matter knowledge that interference may be negligible if the implemented compensation plan is sufficiently `` continuous , '' in the sense that the theoretical compensation plan is a continuous function of and the tolerance of the 3d printer is sufficiently fine so that discretization of compensation is negligible ( appendix [ secnote ] ) .we fit the model in ( [ eqfullmodelcylinder ] ) and ( [ eqweightedtreatment ] ) , having total parameters , to the experiment data .the prior specification remains the same , with independently a priori for , and inches .a hmc algorithm was used to obtain draws from the joint posterior distribution after a burn - in of , and these are summarized in table [ posteriorpredictivetableexperimental ] ..9d1.9d2.9d3.15c@ & & & & & + & -1.06 10 ^ -2 & 1.53 10 ^ -4 & -1.06 10 ^ -2 & ( -1.09,-1.03 ) 10 ^ -2 & 8078 + & 5.79 10 ^ -3 & 3.69 10 ^ -5 & 5.79 10 ^ -3 & ( 5.72 , 5.86 ) 10 ^ -3 & 8237 + & 9.5 10 ^ -1 & 9.46 10 ^ -3 & 9.5 10 ^ -1 & ( 9.31 , 9.69 ) 10 ^ -1 & 8150 + & 1.12 & 6.64 10 ^ -3 & 1.12 & ( 1.0 , 1.13 ) & 8504 + & 7.1 10 ^ -3 & 1.43 10 ^ -4 & 7.1 10 ^ -3 & ( 6.82 , 7.39 ) 10 ^ -3 & 8404 + & 3.14 10 ^ -3 & 1.36 10 ^ -5 & 3.14 10 ^ -3 & ( 3.11 , 3.17 ) 10 ^ -3 & 8924 + & 32.66 & 2.05 & 32.62 & ( 28.69 , 36.76 ) & 8686 + & 48.24 & 2 & 48.12 & ( 44.5 , 52.6 ) & 8666 + & 76.83 & 1.78 & 76.78 & ( 73.42 , 80.44 ) & 8770 + & 86.08 & 0.83 & 86.06 & ( 84.49 , 87.68 ) & 8385 + ) , ( [ eqweightedtreatment ] ) for the inch cylinder. the vertical line is drawn at , marking the boundary between two sections .units to the left of this line were given compensation , and units to the right were given compensation .the posterior mean trend is represented by the solid line , and posterior quantiles are represented by dashed lines .observed data are denoted by dots .corresponding inferred effective treatment for .refined posterior predictions for inches , .comparing inferred effective treatments ( solid line ) with refined effective treatment model ( dashed line ) for the inch cylinder . ] this model provides a good fit for the and inch cylinders , but not the others . as an example , in figure [ posteriorpredictive3error](a ) the posterior mean trend does not correctly capture the observed transition across sections for the inch cylinder .the problem appears to reside in ( [ eqweightedtreatment ] ) .this specification implies that effective treatments of units for are equal - weighted averages of compensations applied to units . 
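the weight function in this simple interference model is just a two-point logistic, which the short sketch below makes explicit; the argument names are ours.

```python
import numpy as np

def effective_treatment(theta, theta_m, theta_nm, x_m, x_nm, lam):
    """simple interference model: logistic-weighted average of the
    compensation of the unit's own section (midpoint theta_m) and of its
    nearest neighbouring section (midpoint theta_nm); lam sets how fast the
    neighbour's influence dies off away from the section boundary."""
    d_own = np.abs(theta - theta_m)
    d_nbr = np.abs(theta - theta_nm)
    w_own = 1.0 / (1.0 + np.exp(lam * (d_own - d_nbr)))
    return w_own * x_m + (1.0 - w_own) * x_nm
```

exactly at a section boundary the two distances coincide and the function returns the equal-weighted average of the two compensations, which is precisely the implication examined next; at the section midpoint the weight on the neighbour is essentially zero for the large values of lambda reported in the table.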
to assess the validity of this implication, we use the posterior distribution of the parameters to calculate , for each , the inferred effective treatment in ( [ eqtreatmentinterference ] ) .an example of these calculations , figure [ posteriorpredictive3error](b ) , shows that the inferred effective treatment for is nearly inch , the compensation applied to the right - side section .thus , specification ( [ eqweightedtreatment ] ) is invalidated by the experiment .another posterior predictive check helps clarify the problem . from ( [ eqweightedtreatment ] ) , and so which is well defined because in this experiment . plugging in the inferred effective treatments , calculated from ( [ eqtreatmentinterference ] ) , into ( [ eqinferredweightfunction ] ), we then diagnose how to modify ( [ eqweightedtreatment ] ) to better model interference in the experiment .this calculation was made for all cylinders , and the results for inches are summarized in figure [ radius3weight ] as an example .rows in this figure show the weights for each quadrant , and we focus on their behavior in neighborhoods of integral multiples of .neither the decay in the weights [ represented by in ( [ eqweightedtreatment ] ) ] nor the weight for integral multiples of remain constant across sections .in fact , these figures suggest that is a function of , and that a location term is required .they also demonstrate a possible , subtle quadrant effect and , as our experiment blocks on this factor , we are better able to use these posterior predictive checks to refine our simple interference model and capture this unexpected deformation pattern . in the interference model for the inch cylinder , using effective treatments calculated from equation ( [ eqtreatmentinterference ] ) , based on the posterior distribution of parameters from section [ secsimplemodelinterference ] and equation ( [ eqinferredweightfunction ] ) .vertical lines represent for , and numbers at the bottom of each subfigure represent assigned compensations . ]our refined effective treatment model is of the same form as ( [ eqweightedtreatment ] ) , with replaced by , and , respectively . here , represent location shifts across sections suggested by the previous posterior predictive checks .our specific model is \\[-8pt ] & & { } + \mathbb{i}\bigl(|x_{i , m } - x_{i,\mathit{nm}}| = 2\bigr ) \lambda_{r_0,2},\nonumber\end{aligned}\ ] ] where and is measured in absolute units of compensation . from figure [ radius3weight ] andthe fact that location shifts should be modeled using harmonic functions .this model provides a better fit . 
comparing figure [ posteriorpredictive3error](c ) , which displays posterior predictions from the refined model ( based on one chain of posterior draws using a standard random walk metropolis algorithm ) , with the previous model s predictions in figure [ posteriorpredictive3error](a ), we immediately see that the refined model better captures the posterior predictive mean trend .similar improvements exist for the other sections and cylinders .we also compare the original inferred effective treatments obtained from ( [ eqtreatmentinterference ] ) with the refined model in figure [ posteriorpredictive3error](d ) and again observe that the new model better captures interference .three key ingredients relating to the data , model , and experimental design have made our series of analyses possible , and are relevant and useful across a wide variety of disciplines .first is the availability of benchmark data , for example , every unit on the cylinder receiving zero compensation .second is the potential outcomes model ( [ eqmodelcompcylinder ] ) for compensation effects when there is no interference , defined in terms of a fixed number of parameters that do not depend on the compensation plan .these two enable calculation of the posterior predictive distribution of potential outcomes under the assumption of negligible interference .the final ingredient is the explicit distinction between units of analysis and units of interpretation in our design , which provides the means to assess and model interference in the experiment .comparing observed outcomes from the experiment to posterior predictions allows one to infer the structure of interference , which can be validated by further experimentation .these considerations suggest that our methodology can be generalized and applied to other experimental situations with units residing on connected surfaces . in general , when experimenting with units on a connected surface , a principled and step - by - step analysis using the three ingredients above , as illustrated in this paper , can ultimately shed more light on the substantive question of interest .to manufacture 3d printed products satisfying dimensional accuracy demands , it is important to address the problem of interference in a principled manner . recognized that continuous compensation plans implemented on printers with a sufficiently fine tolerance can effectively control a product s printed dimensions without inducing additional complications through interference .their models for product deformation motivated our experiment that introduces interference through the application of a discretized compensation plan to the boundary of a cylinder . 
combining this experiment s data with inferences based on data for which every unit received no compensation led to an assessment of interference in terms of how units effective treatments differed from that physically assigned .further analyses effectively modeled interference in the experiment .it is important to note that the refined interference model s location and scale terms ( [ eqdeltafourierexpansion ] ) , ( [ eqlambdarefined ] ) are a function of the compensation plan .for example , reflecting the assigned compensations across the y axis would accordingly change the location shifts .the implication of this and all our previous observations for manufacturing is that severely discretized compensation plans introduce interference , and , if this fact is ignored , then quality control of 3d printed products will be hindered , especially for geometrically complex products relevant in real - life manufacturing .many research challenges and opportunities for both statistics and additive manufacturing remain to be addressed .perhaps the most important is experimental design in the presence of interference .for example , when focus is on the construction of specific classes of products ( e.g. , complicated gear structures ) , optimum designs can lead to precise estimates of model parameters , hence improved compensation plans and control of deformation .an important and subtle statistical issue that then arises is how the structure of interference changes as a function of the compensation plan derived from the experimental design .instead of being a weighted average of the treatment applied to its section and nearest neighboring section , the derived compensation plan may cause a unit s effective treatment to be a weighted average of treatments applied to other sections as well , with weights depending on the absolute difference in applied compensations .knowledge of the relationship between compensation plans derived from specific experimental designs and interference is necessary to improve quality control in general , and therefore is an important issue to address for 3d printing .in all our analyses , we assumed the were independent . as pointed out by a referee ,when units reside on a constrained boundary , independence of error terms is generally unrealistic .however , we believe that our specific context helps justify this simplifying assumption for several reasons . first , the major objective driving our work on 3d printing is compensation for product deformation . to derive compensation plans ,it is important to accurately specify the mean trend in deformation .although incorporating correlation may change parameter estimates that govern the mean trend , we do not believe that modeling the correlation in errors will substantially help us compensate for printed product deformations .this is something we intend to address further in our future work . . here, the residual is defined as the difference between the observed deformation and the posterior mean of deformation for each angle . ] [ figresiduals ] second , there is a factor that may further confound the potential benefits of including correlated errors in our model : the resolution of the cad model . 
to illustrate , consider the model fit in section [ secnocompensationfit ] .we display the residual plots in figure [ figresiduals ] .all residuals are ( in absolute value ) less than of the nominal radius for inch and at most approximately of the nominal radius for inches , supporting our claim that we have accurately modeled the mean trend in deformation for these products .however , we note that for inches , there is substantial negative correlation in residuals between adjacent units , with the residuals following a high - frequency harmonic trend .there is a simple explanation for this phenomenon .our first manufactured products were inches , and the cad models for these products had low resolution .low resolution in the cad model yields the high - frequency pattern in the residual plots .the next product we constructed was inch , and its cad model had higher resolution than that previously used , which helped to remove this high - frequency pattern .minor trends appear to exist in this particular plot , and an acf plot formally reveals significant autocorrelations .accordingly , we observe that the correlation in residuals is a function of the resolution of the initial cad model . in consideration of our current data and our primary objective to accurately capture the mean trend in deformation , we use independent throughout .we intend to pursue this issue further in our future work , for example , in the direction of .furthermore , as pointed out by the associate editor , correlations in residuals for more complicated products may be accounted for by modeling the interference between units , which is precisely the focus of this manuscript .convergence of our mcmc algorithms was gauged by analysis of acf and trace plots , and effective sample size ( ess ) and [ ( ) , gr ] statistics , which were calculated using independent chains of draws after a burn - in of . in sections [ secnocompensationfit ] and [ secsimplemodelinterference ] , the ess were all above ( the maximum is ) , and the gr statistics were all .the results of the first procedure described in section [ secassessinginterference ] are displayed in figure [ posteriorpredictiveexperiment ] : bold lines represent posterior means , dashed lines quantiles forming the 99% central posterior intervals , and dots the observed outcomes in the experiment , with separate figures for each nominal radius and compensation .for example , the graph in the first row and column of figure [ posteriorpredictiveexperiment ] contains the observed data for angles in the inch radius cylinder that received compensation .this figure also contains the posterior predictive mean and 99% intervals for all angles under the assumption that compensation was applied uniformly to the cylinder .although only four sections of the cylinder received this compensation in the experiment , forming this distribution makes the posterior predictive mean trend transparent , and so helps identify when a unit s observed outcome deviates strongly from its prediction . , and compensation . 
] [ posteriorpredictiveexperiment ]compensation is applied in practice by discretizing the plan at a finite number of points , according to some tolerance specified by the size ( in radians ) for each section or , alternatively , the maximum value of .suppose compensation plan is a continuous function of , and define with a monotonically decreasing continuous function , and then for the cylinders considered in our experiment , as .this is because as , and are grateful to xiao - li meng , joseph blitzstein , david watson , matthew plumlee , the editor , associate editor , and a referee for their valuable comments , which improved this paper .
additive manufacturing , or 3d printing , is a promising manufacturing technique marred by product deformation due to material solidification in the printing process . control of printed product deformation can be achieved by a compensation plan . however , little attention has been paid to interference in compensation , which is thought to result from the inevitable discretization of a compensation plan . we investigate interference with an experiment involving the application of discretized compensation plans to cylinders . our treatment illustrates a principled framework for detecting and modeling interference , and ultimately provides a new step toward better understanding quality control for 3d printing .
mobile phones are becoming increasingly ubiquitous throughout large portions of the world , especially in highly populated urban areas and particularly in industrialized countries , where mobile phone penetration is almost .mobile phone providers regularly collect extensive data about the call volume , calling patterns , and the location of the cellular phones of their subscribers . in order for a mobile phone to place outgoing calls and to receive incoming calls, it must periodically report its presence to nearby cell towers , thus registering its position in the geographical cell covered by one of the towers .hence , very detailed information on the spatiotemporal localization of millions of users is contained in the extensive call records of any mobile phone carrier .if misused , these records - as well as similar datasets on buying habits , e - mail usage , and web - browsing , for instance - certainly pose a serious threat to the privacy of the users .however , the use of privacy - safe , anonymized datasets represent a huge scientific opportunity to uncover the structure and dynamics of the social network at different levels , from the small - scale individual s perspective to the large - scale , collective behavior of the masses , with an unprecedented degree of reach and accuracy . besides the inherent scientific interest of these issues , deeper insight into applications of great practical importancecould certainly be gained .for instance , urban planning , public transport design , traffic engineering , disease outbreak control , and disaster management , are some areas that will greatly benefit from a better understanding of the structure and dynamics of social networks .the use of mobile phone data as a proxy for social interaction has already proved successful in several recent investigations ._ have analyzed the structure of weighted call graphs arising from reciprocal calls that serve as signatures of work- , family- , leisure- or service - based relationships .a coupling between interaction strengths and the network s local structure was observed , with the counterintuitive consequence that social networks turn out to be robust to the removal of the strong ties but fall apart following a phase transition if the weak ties are removed .szab and barabsi have studied social network effects in the spread of innovations , products and new services .they investigated different mobile phone - based services and found the coexistence on the same social network of two distinct usage classes , with either very strong or very weak community - based segregation effects . in the context of urban studies and planning ,ratti _ et al ._ have considered the potential use of aggregated data from mobile phones and other hand - held devices .their mobile landscapes " project aims at the application of location based services to urban studies in order to gain insight into complex and rapidly changing urban dynamics phenomena .more recently , palla , barabsi and vicsek used mobile phone data to study the evolution of social groups .they found that large groups persist for longer times if they are capable of dynamically altering their membership , suggesting that an ability to change the group composition results in better adaptability .in contrast , the behavior of small groups displays the opposite tendency , the condition for long - term persistence being that their composition remains stable . 
In the following sections, we present new results that address novel aspects of human dynamics and social interactions obtained from extensive mobile phone data. In Sect. 2 we show how large-scale collective behavior can be described using aggregated data resolved in both time and space. We stress the importance of investigating large departures from the average and develop the basic framework to quantify anomalous fluctuations by means of standard percolation theory tools. In Sect. 3 we focus on the individual level and study patterns of calling activity. We show that the interevent time of consecutive calls is heavy-tailed, a finding that has implications for the dynamics of spreading on social networks. Furthermore, by fixing the time of observation between consecutive calls it is possible to use the phone call data to characterize some aspects of human mobility.

The spatial dependence of the call activity at any given time can be conveniently displayed by means of maps divided into Voronoi cells, which delimit the area of influence of each transceiver tower or antenna. The Voronoi tessellation partitions the plane into polygonal regions, associating each region with one transceiver tower. The partition is such that all points within a given Voronoi cell are closer to its corresponding tower than to any other tower in the map. Figure 1 shows activity maps for aggregated data corresponding to a 1-hour interval. The upper panel shows the activity pattern (in log scale) for a peak hour (Monday noon), while the lower panel shows the same urban neighborhood during an off-peak hour (Sunday at 9 am). The differences between both panels reflect the intrinsic rhythm and pulse of the city: we can expect call patterns during peak hours to be dominated by the hectic activity around business and office areas, whereas other, presumably residential and leisure areas can show increased activity during off-peak times, thus leading to different, spatially distinct activity patterns. Besides different spatial patterns, each particular time of the day, as well as each day of the week, is characterized by a different overall level of activity. This phenomenon is shown by the plot at the center of Figure 1, in which aggregated data for a country is shown as a function of time (data was binned in time intervals of 1 hour). As expected, the overall normalization of the aggregated pattern is lower during weekends than during weekdays, except around weekend midnights and early mornings, when many people go out.

The minimum spatial resolution is determined by either the typical distance between towers or, in rural regions with sparse tower density, by the reach of the radio-frequency signals exchanged between the mobile handset and the antenna (typically ranging from a few hundred meters to several kilometers). To explore activity differences at larger scales, the data of neighboring cells can be aggregated. At the expense of some loss of spatial resolution, aggregating data into larger spatial bins (taking, e.g., a regular spatial grid covering the entire country) allows for better statistics and for a more stable activity pattern. That is, the number of calls made from a group of nearby cells at a certain time and day of the week is expected to be fairly constant, except for small statistical fluctuations. Usually, activity patterns are strongly correlated with the daily pulse of populated areas (such as those shown in Fig.
1) and, at a larger scale, with variations in population density between different regions within the country. In contrast, departures from the mean expected activity are in general not trivially correlated with population density and describe instead interesting dynamical features. The measurement of fluctuations around the mean expected activity is of paramount importance, since it allows a quantitative measurement of anomalous behavior and, ultimately, of possible emergency situations. This indeed constitutes the base of proposed real-time monitoring tools such as the _Wireless Phone-based Emergency Response_ (WIPER) system. Anomalous patterns indicative of a crisis (such as the occurrence of natural catastrophes and terrorist attacks) could be detected in real time, plotted on satellite and GIS-based maps of the area, and used in the immediate evaluation of mitigation strategies, such as potential evacuation routes or barricade placement, by means of computer simulations.

The call volume shows strong variations with time and day of the week, as shown in Figure 1, but differences across subsequent weeks are generally mild (provided one considers call traffic in the same place, time and day of the week). To capture the weekly periodicity of the observed patterns, we define $N_w(x; t, t+\Delta t)$ as the number of calls recorded at location $x$ (which can either denote a single Voronoi cell or a group of neighboring cells) during week $w$ between times $t$ and $t+\Delta t$, where time is defined modulo 1 week. Assuming we have access to continuous data for $W$ weeks, the mean call activity is given by
$$\bar{N}(x; t, t+\Delta t) = \frac{1}{W}\sum_{w=1}^{W} N_w(x; t, t+\Delta t).$$
Note that, in the same way as one can trade off spatial resolution for increased statistics by summing over a group of Voronoi cells, varying $\Delta t$ one can regulate time accuracy versus statistics. This certainly depends on the extent to which aggregated data shows a regular, stable behavior. The results presented here correspond to $\Delta t = 1$ hour.

The scale to measure departures from the average behavior is set by the _standard deviation_, defined as
$$\sigma(x; t, t+\Delta t) = \left\{\frac{1}{W}\sum_{w=1}^{W}\left[N_w(x; t, t+\Delta t) - \bar{N}(x; t, t+\Delta t)\right]^2\right\}^{1/2}.$$
Hence, using recorded data for an extended period of time, one can determine the expected call traffic levels and corresponding deviations for all times and locations. Once this _normal_ behavior is established, _anomalous_ fluctuations above or below a given threshold can be obtained using the condition
$$\left|N_w(x; t, t+\Delta t) - \bar{N}(x; t, t+\Delta t)\right| > \Delta\,\sigma(x; t, t+\Delta t),$$
where $\Delta$ is a constant that sets the fluctuation level.

We grouped Voronoi cells together, generating a regular 2D grid made of square bins of about 12 km of linear size. Considering a fixed time slice, we study the spatial clustering of bins showing anomalous activity at different fluctuation levels. In order to illustrate our procedure, Figure 2 shows the activity and fluctuations in a grid of square bins. We compare the activity in the same region for 2 different weeks (corresponding to the same time and day of the week). The left panels show a _normal event_, in which fluctuations around the local mean activity are typically small, with just a few scattered bins having somewhat larger deviations. The right panels, however, show an _anomalous event_, characterized by extended, spatially correlated fluctuations that indicate the emergence of a large-scale, coordinated activity pattern.
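In code, the normal-versus-anomalous classification described above is essentially a few lines of array arithmetic. The sketch below is a minimal reconstruction, assuming the call counts have already been aggregated into a (weeks x spatial bins x time-of-week slots) array; the array layout, the threshold value, and the synthetic example are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

def anomalous_bins(calls, delta=3.0):
    """Flag spatial bins with anomalous call activity.

    calls : array of shape (W, B, T) -- number of calls per week,
            spatial bin and time-of-week slot (e.g. 1-hour slots).
    delta : fluctuation level, in units of the standard deviation.

    Returns a Boolean array of shape (W, B, T): True where the activity
    departs from the weekly mean by more than delta standard deviations.
    """
    mean = calls.mean(axis=0)                   # expected activity per bin and slot
    sigma = calls.std(axis=0)                   # scale of the normal fluctuations
    sigma = np.where(sigma > 0, sigma, np.inf)  # bins with no variability never trigger
    return np.abs(calls - mean) > delta * sigma

# Synthetic example: 26 weeks, 400 bins (a 20x20 grid), 168 hourly slots.
rng = np.random.default_rng(0)
calls = rng.poisson(lam=50, size=(26, 400, 168))
calls[3, 120:140, 60] += 400                    # inject a localized "anomalous event"
mask = anomalous_bins(calls, delta=3.0)
# Roughly the 20 injected bins (plus a couple of statistical outliers) are flagged.
print("anomalous bins in week 3, hour 60:", int(mask[3, :, 60].sum()))
```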
As pointed out above, the existence of anomalous activity patterns could be indicative of possible emergency situations. Similarly to the Voronoi maps already discussed, the upper panels in Fig. 2 show the activity (number of calls per hour inside each square bin) in log scale. White bins correspond to areas not covered by the mobile phone provider. Taking a fixed threshold value, the bottom panels show the high-activity bins above the fluctuation threshold (in black) and the bins with normal activity (in grey). Note that, although the activity maps have a similar appearance, to the degree that they seem at first look indistinguishable, the fluctuation maps display striking differences.

In order to quantify the clustering of anomalous bins, we will use the standard tools of percolation theory and determine the size of the largest cluster, the number of different clusters, and the size distribution of all clusters. The statistical significance of the measured clustering is evaluated by comparing it to results from randomized distributions, in which many different configurations are randomly generated, keeping fixed the total number of high-activity bins above the fluctuation threshold. The substrate, which is formed by all bins with non-zero activity, remains always the same (in Fig. 2, for instance, the substrate is the set of all grey and black bins). Clusters are defined by first- and second-order nearest neighbors in the square 2D grid. In the remainder of this section, we will focus on a specific large-scale anomalous event and compare it to the normal behavior observed in data of a different week (but corresponding to the same time and day of the week). The comparison between normal and anomalous events will illustrate the use of percolation observables as diagnostic tools for anomaly detection.

Figure 3 shows the size of the largest cluster as a function of the fluctuation threshold, for the normal case (left) and the anomalous one (right). Each measured plot (solid line with circles) is compared to results from randomized distributions. The latter correspond to the mean (long-dashed line) and confidence bounds (short-dashed and dotted lines), as obtained from generating 100 random configurations in each case. As expected, the plots show that the size of the largest cluster monotonically decreases with the fluctuation threshold. However, while the clustering in the normal case lacks any significance, the anomalous event shows large departures from the clustering expected in a random configuration. In the same vein, Figure 4 shows the number of different clusters as a function of the fluctuation threshold, where measurements on the call data for the same normal (left) and anomalous (right) events are compared to results from randomized configurations.
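In practice, the cluster analysis amounts to labeling connected components of the anomalous-bin map and comparing the resulting observables with randomized placements of the same number of bins on the same substrate. The sketch below is one possible implementation (not the authors' code); eight-connectivity on the square grid is used to mimic the first- and second-order nearest-neighbor rule, and scipy's labeling routine does the bookkeeping.

```python
import numpy as np
from scipy import ndimage

def cluster_observables(anomalous, substrate):
    """Percolation-style observables for a 2D map of anomalous bins.

    anomalous : Boolean 2D array, True where the bin exceeds the threshold.
    substrate : Boolean 2D array, True where the bin has non-zero activity.

    Returns (largest cluster size, number of clusters, sorted cluster sizes).
    """
    structure = np.ones((3, 3), dtype=bool)       # 1st + 2nd nearest neighbors
    labels, n_clusters = ndimage.label(anomalous & substrate, structure=structure)
    sizes = np.bincount(labels.ravel())[1:]       # drop the background label 0
    largest = int(sizes.max()) if n_clusters else 0
    return largest, n_clusters, np.sort(sizes)[::-1]

def randomized_reference(anomalous, substrate, n_config=100, seed=0):
    """Largest-cluster sizes for random placements of the same number of
    anomalous bins on the same substrate (the significance reference)."""
    rng = np.random.default_rng(seed)
    n_anom = int((anomalous & substrate).sum())
    sites = np.argwhere(substrate)
    out = []
    for _ in range(n_config):
        pick = sites[rng.choice(len(sites), size=n_anom, replace=False)]
        rand_map = np.zeros_like(substrate)
        rand_map[pick[:, 0], pick[:, 1]] = True
        out.append(cluster_observables(rand_map, substrate)[0])
    return np.array(out)

# Usage: for an observed map `obs` and substrate `sub` (Boolean 2D arrays),
#   largest, n, sizes = cluster_observables(obs, sub)
#   reference = randomized_reference(obs, sub, n_config=100)
# and `largest` can then be compared against reference.mean() and reference.std().
```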
as before ,in the normal case the number of clusters agrees well with the expectations for random configurations , while significant departures are observed in the anomalous case .figure 5 shows the cumulative size distribution of all clusters , , as a function of the cluster size , compared to random configurations .the upper panels display results for , while the bottom ones show results for , as indicated .moreover , the left panels correspond to the normal event , while the right panels to the anomalous event .again , the measured cluster size distribution in the normal case is in good agreement with the expected one for a random configuration .in contrast , the anomalous event shows the occurrence of a few very large clusters formed by many highly active bins .these unusually large structures can not be explained as arising just from random configurations , but instead are the result of the spatiotemporal correlation of large , highly active regions . as a summary , in this section we showed how large - scale collective behavior can be described using aggregated data resolved in both time and space .moreover , we developed the basic framework for detecting and characterizing spatiotemporal fluctuation patterns , which is based on standard procedures of statistics and percolation theory .these tools are particularly effective in detecting extended anomalous events , as those expected to occur in emergency scenarios due to e.g. natural catastrophes and terrorist attacks .in order to use the huge amount of data recorded by mobile phone carriers to investigate various aspects of human dynamics , a necessary starting point it is to characterize the dynamics of the individual calling activity _per se_. previous studies have measured the time between consecutive individual - driven events , such as sending e - mails , printing , and visiting web pages or the library .those events are described by heavy - tailed processes , challenging the traditional poissonian modeling framework , with consequences on task completion in computer systems . in this sectionwe explore the interevent distribution of the calling activity of mobile phone users during month . as many other human activities, the calling activity pattern is highly heterogeneous .while some users rarely use the mobile phone , others make hundreds or even thousands of calls each month . to analyze such different levels of activity , we group the users based on their total number of calls .within each group , we measure the probability density function of the time interval between two consecutive calls made by each user . 
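Computationally, this measurement reduces to collecting, for each user, the gaps between consecutive calls and then rescaling each distribution by the user's mean gap, so that groups with very different activity levels can be compared on a single curve. The sketch below is a minimal version of that pipeline; the input arrays, the minimum-activity cutoff, and the logarithmic binning are illustrative assumptions rather than details taken from the original analysis.

```python
import numpy as np

def rescaled_interevent_distribution(user_ids, timestamps, bins=50):
    """Distribution of interevent times, rescaled by each user's mean gap.

    user_ids   : array of caller ids, one entry per call.
    timestamps : call times in seconds (same length as user_ids).

    Returns (bin centers, probability density) of delta_tau / <delta_tau>,
    pooled over all users -- the quantity whose collapse onto a single
    curve is discussed in the text.
    """
    order = np.lexsort((timestamps, user_ids))   # sort by user, then by time
    uid, t = user_ids[order], timestamps[order]
    gaps = np.diff(t)
    same_user = uid[1:] == uid[:-1]              # keep gaps within one user only
    gaps, gap_uid = gaps[same_user], uid[1:][same_user]

    rescaled = []
    for u in np.unique(gap_uid):
        g = gaps[gap_uid == u]
        if len(g) >= 10 and g.mean() > 0:        # require a minimum level of activity
            rescaled.append(g / g.mean())
    rescaled = np.concatenate(rescaled)

    # Logarithmic binning is customary for heavy-tailed distributions.
    edges = np.logspace(np.log10(rescaled.min() + 1e-9),
                        np.log10(rescaled.max()), bins + 1)
    density, _ = np.histogram(rescaled, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, density
```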
as shown by the inset of fig .[ fig6 ] , the tail of the distribution is shifted to longer interevent times for users with less activity .however , if we plot as a function of , where is the average interevent time for the corresponding user , the data collapses into a single curve ( fig .[ fig6 ] ) .this indicates that the measured interevent distribution follows the expression = , where is independent from the average activity level of the population .this represents a universal characteristic of the system that surprinsingly also coincides with results from e - mail communication .the data are well fitted by where the power law scaling with exponent is followed by an exponential cutoff at days .equation ( [ eq : distr ] ) is shown by a solid line in the inset of fig .[ fig6 ] and its scaled version is presented in the main panel of the figure using hours , which is the average interevent time measured for the whole population .this result , clearly different from the one predicted by a poisson approximation , would for instance affect the predictions of spreading dynamics through the network of calls .to explore the interplay between human activity and mobility patterns , we fix the characteristic observation time to min and collect only those consecutive calls that occur with this interevent time , recording also the time of the day in which they occurred ( fig .[ fig7 ] a ) . for each pair of calls, we count how many of them result in a change of coordinate , e.g. the user traveled in the min time interval between the calls ( fig .[ fig7 ] b ) .the number of events that result in a change of location and the number of calls as a function of time capture the daily activity pattern of the users .we find that both the call and the mobility pattern decrease at night and have clear peaks near noon and late evening .there is a factor of between the largest and the smallest number of events ( calls / changes of location ) reported during the day .interestingly , when we calculate the fraction of consecutive calls also resulting in a potential change of location , the quantity varies at most during the whole day ( fig .[ fig7]c ) .this indicates that although the total activity varies strongly , the percentage of the people that are calling and traveling remains rather stable .more importantly , the average distance traveled within min .is stable in the vicinity of km ( fig .[ fig7]d ) , a value consistent for the combination between walk and motor transportation .novel aspects of human dynamics and social interactions were addressed by means of mobile phone data with time and space resolution .this allowed us to study the mean collective behavior at large scales and focus on the occurrence of anomalous events . considering a fixed time slice , we partitioned the space using a regular grid and studied the aggregated call activity inside each square bin forming the grid .we showed that anomalous events give rise to spatially extended patterns that can be meaningfully quantified in terms of standard percolation observables . by considering a series of consecutive time slices, we could investigate the rise , clustering and decay of spatially extended anomalous events , which could be relevant e.g. 
in real-time detection of emergency situations. We also investigated patterns of calling activity at the individual level. We observed that the interevent time of consecutive calls is heavy-tailed, a finding that has implications for the dynamics of spreading phenomena on social networks, and that agrees with results previously reported on other, related human activities. We also showed that, despite the complexity inherent in the interevent calling patterns, it is still possible to recover some characteristic values from the behavior of the population that are stationary during the day, such as the fraction of active traveling population and their average distance traveled. In many ways, these results represent only a first step towards understanding human activity patterns. Our results indicate that the rich information provided by mobile communication data opens avenues to addressing novel problems. These tools offer a chance to improve our understanding of complex networks as well, by potentially correlating the structure of social networks with the spatial layout of the users as nodes, thus contributing to a better understanding of the spatiotemporal features of network evolution.

This work was supported by the James S. McDonnell Foundation 21st Century Initiative in Studying Complex Systems, the NSF within the DDDAS (CNS-0540348), ITR (DMR-0426737) and IIS-0513650 programs, as well as by the U.S. Office of Naval Research N00014-07-C and the NAP project sponsored by the National Office for Research and Technology (KCKHA005). Data analysis was performed on the Notre Dame Biocomplexity Cluster supported in part by NSF MRI Grant No. DBI-0420980.

M. C. González and A.-L. Barabási, Nature Phys. 3, 224 (2007).
J.-P. Onnela, J. Saramäki, J. Hyvönen, G. Szabó, D. Lazer, K. Kaski, J. Kertész, and A.-L. Barabási, Proc. Natl. Acad. Sci. USA 104, 7332 (2007).
J.-P. Onnela, J. Saramäki, J. Hyvönen, G. Szabó, M. A. de Menezes, K. Kaski, A.-L. Barabási, and J. Kertész, New J. Phys. 9, 179 (2007).
G. Szabó and A.-L. Barabási, arXiv:physics/0611177.
C. Ratti, R. M. Pulselli, S. Williams, and D. Frenchman, Environment and Planning B 33, 727 (2006).
C. Ratti, A. Sevtsuk, S. Huang, and R. Pailer, Location Based Services and TeleCartography (Springer, Berlin, Heidelberg, 2007), Sect. V, p. 433.
G. Palla, A.-L. Barabási, and T. Vicsek, Nature 446, 664 (2007).
G. Palla, A.-L. Barabási, and T. Vicsek, Fluct. Noise Lett. 7, L273 (2007).
R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
S. Eubank, H. Guclu, V. S. A. Kumar, M. Marathe, A. Srinivasan, Z. Toroczkai, and N. Wang, Nature 429, 180 (2004).
C. Viboud, O. Bjørnstad, D. L. Smith, L. Simonsen, M. A. Miller, and B. T. Grenfell, Science 312, 447 (2006).
V. Colizza, A. Barrat, M. Barthélemy, A.-J. Valleron, and A. Vespignani, PLoS Medicine 4(1): e13 (2007).
M. C. González and H. J. Herrmann, Physica A 340, 741 (2004).
G. Madey, G. Szabó, and A.-L. Barabási, in Lecture Notes in Computer Science, V. N. Alexandrov, G. D. van Albada, P. M. A. Sloot, and J. Dongarra (eds.) (Springer, Berlin, 2006), Vol. 3993, p. 417.
T. Schoenharl, R. Bravo, and G. Madey, Int. Contr. Sys. 11, 209 (2007).
A.-L. Barabási, Nature 435, 207-211 (2005).
A.-L. Barabási and R. Albert, Science 286, 509 (1999).
R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
S. N. Dorogovtsev and J. F. F. Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW (Oxford University Press, Oxford, 2003).
R. Pastor-Satorras and A. Vespignani, Evolution and Structure of the Internet (Cambridge University Press, Cambridge, 2004).
S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, Phys. Rep. 424, 175 (2006).
M. Newman, A.-L. Barabási, and D. J. Watts, The Structure and Dynamics of Networks (Princeton University Press, Princeton and Oxford, 2006).
G. Caldarelli, Scale-Free Networks (Oxford University Press, Oxford, 2007).
G. Caldarelli and A. Vespignani (eds.), Large Scale Structure and Dynamics of Complex Networks (World Scientific, Singapore, 2007).
S. S. Manna and P. Sen, Phys. Rev. E 66, 066114 (2002).
S. H. Yook, H. Jeong, and A.-L. Barabási, Proc. Natl. Acad. Sci. USA 99, 13382 (2003).
A. Barrat, M. Barthélemy, and A. Vespignani, J. Stat. Mech. P05003 (2005).
G. Grinstein and R. Linsker, Phys. Rev. Lett. 97, 130201 (2006).
M. C. González, P. G. Lind, and H. J. Herrmann, Phys. Rev. Lett. 96, 088702 (2006).
P. G. Lind, J. S. Andrade Jr., L. R. da Silva, and H. J. Herrmann, Phys. Rev. E 76, 036117 (2007).
P. G. Lind, J. S. Andrade Jr., L. R. da Silva, and H. J. Herrmann, Europhys. Lett. 78, 68005 (2007).
novel aspects of human dynamics and social interactions are investigated by means of mobile phone data . using extensive phone records resolved in both time and space , we study the mean collective behavior at large scales and focus on the occurrence of anomalous events . we discuss how these spatiotemporal anomalies can be described using standard percolation theory tools . we also investigate patterns of calling activity at the individual level and show that the interevent time of consecutive calls is heavy - tailed . this finding , which has implications for dynamics of spreading phenomena in social networks , agrees with results previously reported on other human activities .
In a column entitled "Fremde Federn. Im Gegenteil" (very loosely, "inappropriate attributions") in the July 24, 2006 issue of _Die Welt_, a Berlin newspaper, historian of science Ernst Peter Fischer gave a name to a phenomenon of which some of us are aware, that sometimes (often?) a physical discovery or law or a number is attributed to and named after a person who is arguably not the first person to make the discovery:

"Das nullte Theorem der Wissenschaftsgeschichte lauten, dass eine Entdeckung (Regel, Gesetzmäßigkeit, Einsicht), die nach einer Person benannt ist, nicht von dieser Person herrührt."

(The zeroth theorem of the history of science reads that a discovery (rule, regularity, insight), named after someone, (_often?_) did not originate with that person.) [_often?_ added]

Fischer goes on to give examples, some of which are:

Avogadro's _number_ was first determined by Loschmidt in 1865 (although Avogadro had found in 1811 that any gas at NTP had the same number of molecules per unit volume).

Halley's comet was known 100 years before Halley noted its appearance at regular intervals and predicted correctly its next appearance.

Olbers' paradox (1826) was discussed by Kepler (1610) and by Halley and Chéseaux in the 18th century.

Fischer spoke of examples in the natural sciences, as do I. But numerous instances exist in other areas. In mathematics the theorem is known as Arnold's principle or Arnold's law, after the Russian mathematician V. I. Arnold. Arnold's law was enunciated by M. V. Berry some years ago to codify Arnold's efforts to correct inappropriate attributions that neglect Russian mathematicians. Berry also proposed Berry's law, a far broader, self-referential theorem: "Nothing is ever discovered for the first time." If the zeroth theorem were known as Fischer's law, it would be a clear example of Arnold's principle. As it is, the zeroth theorem stands as an illustration of Berry's law.

In each example I present the bare bones of the issue: the named effect, the generally recognized "owner," the prior "claimant," with dates. After briefly describing the protagonists' origins and careers, I quote from the appropriate literature to establish the truth of the specific example.

My first example is the Lorentz condition that defines the Lorentz gauge for the electromagnetic scalar and vector potentials. The relation was specified by the Dutch theoretical physicist Hendrik Antoon Lorentz in 1904 in an encyclopedia article.
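Because the displayed equations in this and the following passages did not survive the extraction, it may help to recall the condition in its standard modern (Gaussian-unit) form; this is a textbook restatement, not necessarily the notation of the 1904 article:
$$\nabla \cdot \vec{A} + \frac{1}{c}\,\frac{\partial \Phi}{\partial t} = 0,
\qquad\text{or, covariantly,}\qquad
\partial_\alpha A^\alpha = 0, \quad A^\alpha = (\Phi, \vec{A}).$$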
in his notation the constraint reads : + or , in covariant form , where and .eq.([lorentz ] ) or ( [ lorentzc ] ) is so famous and familiar that any citation of it will be to some textbook .if it is ever actually traced back to lorentz , the reference will likely the cited encyclopedia article or his book , _ theory of electrons _ , published in 1909 .lorentz was not the first to point out eq.([lorentz ] ) .thirty - seven years earlier , in 1867 , the danish theorist ludvig valentin lorenz , writing about the identity of light with the electromagnetism of charges and currents , stated the constraint on his choice of potentials . his version of eq.([lorentz ] ) reads : where is the scalar potential and are the components of the vector potential .the strange factors of 2 and 4 appearing here and below have their origins in a since abandoned definition of the electric current in terms of moving charges . in 1900 , in a festschrift volume marking the 25th anniversary of h. a. lorentz s doctorate , the prussian theorist emil johann wiechert described the introduction of the scalar and vector potentials into the maxwell equations in much the way it is done in textbooks today . he notes that the divergence of the vector potential is not constrained by the relation and imposes the condition , in his notation where the vector potential is and is the speed of light : ludvig valentin lorenz was born in 1829 in helsingr , denmark of german - huguenot extraction .after gymnasium , in 1846 he entered the polytechnic high school in copenhagen , which had been founded by rsted in the year of lorenz s birth .he graduated as a chemical engineer from the university of copenhagen in 1852 . with occasional teaching jobs ,lorenz pursued research in physics and in 1858 went to paris to work with lam among others .an examination essay on elastic waves resulted in a paper of 1861 , where retarded solutions of the wave equation were first published . on his return to copenhagen, he published on optics ( 1863 ) and the identity of light with electromagnetism already mentioned . in 1866he was appointed to the faculty of the military high school outside copenhagen and also elected a member of the royal danish academy of sciences and letters .after 21 years at the military high school , lorenz obtained the support of the carlsberg foundation from 1887 until his death in 1891 . in 1890 his last paper ( only in danish ) was a detailed treatment on the scattering of radiation by spheres , anticipating what is known as `` mie scattering '' ( 1908 ) , another example of the zeroth theorem , not included here .emil johann wiechert was born in tilsit , prussia . when he was 18 he and his widowed mother moved to knigsberg where he attended first a gymnasium and then the albertus university .he completed his ph.d . on elastic waves in 1889 and began as a lecturer and researcher in knigsberg in 1890 .his research , both experimental and theoretical , encompassed electromagnetism , cathode rays , and electron theory , as well as geophysical topics such as the mass distribution within the earth . 
in 1897 he was invited to gttingen , first as _ prizatdozent _ and then in 1898 made professor and director of the gttingen geophysical institute , the first of its kind .wiechert remained at gttingen for the rest of his career , training students and working in geophysics and seismology , with occasional forays into theoretical physics topic such as relativity , and electron theory .he found his colleagues , felix klein , david hilbert , and his former mentor woldemar voigt , congenial and stimulating enough to turn down numerous offers of professorships elsewhere . in physicswiechert s name is famous for the linard - wiechert potentials of a relativistic charged particle ; in geophysics , for the wiechert inverted - pendulum seismograph .hendrik antoon lorentz was born in arnhem , the netherlands , in 1853 .after high school in arnhem , 1866 - 69 , he attended the university of leiden where he studied physics and mathematics , graduating in 1872 .he received his ph.d . in 1875 for a thesis on aspects on the electromagnetic theory of light .his academic career began in 1878 when at the age of 24 lorentz was appointed professor of theoretical physics at leiden , a post he held for 34 years .his research ranged widely , but centered on electromagnetism and light .his name is associated with lorenz in the lorenz - lorentz relation between the index of refraction of a medium and its density and composition ( lorenz , 1875 ; lorentz , 1878 ) .notable were his works in the 1890s on the electron theory of electromagnetism ( now called the microscopic theory , with charged particles at rest and in motion as the sources of the fields ) and the beginnings of relativity ( the fitzgerald - lorentz length contraction hypothesis , 1895 ) and a bit later the lorentz transformation .lorentz shared the 1902 nobel prize in physics with pieter zeeman for `` their researches into the influence of magnetism upon radiation phenomena . ''he received many honors and memberships in learned academies and was prominent in national and international scientific organizations until his death in 1928 .+ [ h ] lorenz s 1867 paper establishing the identity of light with electromagnetism was evidently written without knowledge of maxwell s famous work of 1865 .he begins with the quasi - static potentials , with the vector potential in the kirchhoff - weber form , and proceeds toward the differential equations for the fields . actually , following the continental approach of helmholtz , lorenz uses electric current density instead of electric field , noting the connection via ohm s law and the conductivity ( called _ k _ ) .but before proceeding to his full theory , lorenz establishes that his generalization of the quasi - static limit is consistent with all known observations to date . using a retarded form of the scalar potential , he demonstrates by expanding the retarded time in powers of , where is a velocity parameter , that his retarded scalar potential and an emergent retarded vector potential yield the same electric field as the instantaneous kirchhoff - weber forms to second order in the presumably small .furthermore , by clever choices of the velocity , he is able to show a restricted class of what are now called gauge transformations involving the neumann and kirchhoff - weber forms of the vector potential , although he does not emphasize the point .because he is including light within his framework , he is not content with the quasi - static approximation. 
he proceeds to define the current density ( electric field ) components as , write a retarded form for the scalar potential , called , and then present the ( almost ) familiar expressions for the current density / electric field in terms of the scalar and vector potential : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` hence the equations for the propagation of electricity , as regards the experiments on which they rest , are just as valid as [ the quasi - static equations ] if [ ... ] the following form be assigned to them , where , for brevity s sake , we put these equations are distinguished from equations ( i ) [ the kirchhoff - weber forms ] by containing , instead of , the somewhat less complicated members ; and they express further that the entire action between free electricity and the electric currents _ requires time to propagate itself _........ '' + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ i judge from the throw - away phrases , `` are just as valid '' and `` for brevity s sake , '' and earlier remarks distinguishing his form of the vector potential from the kirchhoff - weber form ( in addition to having retardation ) , that lorenz understood gauge invariance without formally introducing the concept . in passing itis curious to note than in a paper published a year later , maxwell criticized lorenz s ( and riemann s ) use of retarded potentials , claiming that they violated conservation of energy and momentum .but lorenz had referred to his 1861 paper on the propagation of elastic waves to observe that the wave equation is satisfied by retarded sources . 
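The retarded potentials are central to this passage, but their explicit form has also been lost above. In modern Gaussian-unit notation (again a standard restatement, not Lorenz's 1867 symbols) they read
$$\Phi(\vec{r},t) = \int \frac{\rho\!\left(\vec{r}\,',\, t_r\right)}{|\vec{r}-\vec{r}\,'|}\, d^3r',
\qquad
\vec{A}(\vec{r},t) = \frac{1}{c}\int \frac{\vec{J}\!\left(\vec{r}\,',\, t_r\right)}{|\vec{r}-\vec{r}\,'|}\, d^3r',
\qquad t_r = t - \frac{|\vec{r}-\vec{r}\,'|}{c},$$
and these potentials satisfy the Loren(t)z condition together with the wave equations driven by $-4\pi\rho$ and $-4\pi\vec{J}/c$.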
in his march toward the differential equations for the `` fields , '' lorenz notes that with his choice of the scalar and vector potentials : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we obtain + moreover from ( 5 ) , `` and in like manner for . '' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ lorenz then proceeds to derive the ampre - maxwell equation relating the curl of the magnetic field to the sum of the displacement current and the conduction current density and goes on to obtain the other equations equivalent to maxwell s . in the last part of his paper , lorenz sets himself the task of reversing his path , beginning with his differential equations for the fields , which he views as describing light , and working back toward his form of the retarded potentials .imposition of eq.([lorenz ] ) leads to the simple wave equations whose solutions , as he proved in 1861 , are the standard retarded potentials in what is known as the loren(t)z gauge .lorenz stresses that : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` this result is a new proof of the identity of the vibrations of light with electrical currents ; for it is clear now , not only that the laws of light can be deduced from those of electrical currents , but that the converse way may be pursued . . . . 
''_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in his 1900 lorentz festschrift paper,``elektrodynamische elementargestze , '' wiechert begins by summarizing the theory of optics , introducing two reciprocal transverse vector fields in free space without sources ( his , his ). they have zero divergences , and satisfy coupled curl equations ( faraday and ampre - maxwell ) and separate wave equations .he then expresses the magnetic field in terms of the vector potential ( his ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` wir wollen auswhlen und das potential mit bezeichen , dann is zu setzen : damit wird noch nicht bestimmt ; vor allen kommt in betracht , dass der werth von willkrlich bleibt ; eine passende verfgung behalten wir uns vor . ''( [ div ] remains arbitrary ; we keep an appropriate choice in mind . 
)_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ wiechert then eliminates the magnetic field in favor of the vector potential in faraday s law and finds the equivalent of our , where is the scalar potential ( his ) .eliminating the potentials in the source - free coulomb s law and ampre - maxwell equation leads him to : and a corresponding equation for .wiechert then states : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ uber die unbestimmtheit in verfgend stezen wir nun : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this is wiechert s version of the loren(t)z condition , already quoted as eq.([wiechert ] ) .he then states that this relation simplifies the wave equations into the standard form and that the two wave equations , the loren(t)z condition , and the definitions of the fields in terms of the potentials are equivalent to the maxwell equations ( for free fields ) . 
later in his paper, wiechert adds charge and current sources to the equations and states the retarded solutions for his potentials .other authors are cited , but not lorenz .thirty - seven years after lorenz and four years after wiechert , lorentz wrote two encyclopedia articles , the second of which contains on page 157 a discussion remarkably parallel ( in reverse order ) to that quoted earlier from lorenz : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ skalaren potentials und eines magnetischen vektorpotentials darstellen .es gengen diese hilfsgrssen den differentialgleichungen + es ist zwischen den potentialen besteht die relation + + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ lorentz s ( 2 ) is the loren(t)z condition displayed above as eq.([lorentz ] ) . in an appendix in _ theory of electrons_(1909 ) , he discusses gauge transformations and potentials that do not satisfy the loren(t)z condition , but then states that he will always use potentials that satisfy eq.(1 ) . + while some may argue that lorenz s 1967 paper was not a masterpiece of clarity , i would argue that his multifaceted approach makes clear that thirty - three years before wiechert ( a ) he understood the arbitariness and equivalence of the different forms of potentials , and ( b ) he understood that eq.([lorenz ] ) is a requirement for the retarded form of the neumann ( loren(t)z gauge ) potentials .the second example is the dirac delta function , popularized by paul adrien maurice dirac , british theoretical physicist , in his authoritative text _ the principles of quantum mechanics _ , first published in 1930 .there he introduces the ( improper ) impulse or delta function in his discussion of the orthogonality and completeness of sets of basis functions in the continuum .his first definition is + given the usefulness of the delta function in practical , if not rigorous , mathematics , it is not surprising that the delta function had `` discoverers '' before dirac .oliver heaviside , self - taught english electrical engineer , applied mathematician , and physicist , is arguably the person who should get credit for the introduction of the delta function .thirty - five years before dirac , in the march 15 , 1895 issue of the british journal _ the electrician _ , he described his impulsive function in mathematical terms as + here is the heaviside or step function ( for , for , and ) .the origins of the delta function can be traced back to the early 19th century . cauchy and poisson , and later hermite , used a function : + within double integrals in proof of the fourier - integral theorem and took the limit at the end of the calculation . 
in the second half of the century kirchhoff , kelvin , and helmholtz in other applicationsused similarly a function : + while these sharply peaked functions presage the delta function , it was heaviside and then dirac who gave it explicit , independent status .oliver heaviside was born in london , england in 1850 .illness in his youth left him partially deaf .though an outstanding student , he left school at 16 to become a telegraph operator , with the help of his uncle charles wheatstone , wealthy inventor of the telegraph .studying in his spare time , he began serious analysis of electromagnetism and publication in 1872 while working in newcastle .two years later , illness prompted him to resign his position to pursue research in isolation at his family home .there he conducted investigations of the skin effect , transmission line theory , and the beneficial influence of distributed inductance in preventing distortion and diminishing attenuation . by 1885heaviside eliminated the potentials from maxwell s theory and expressed it in terms of the four equations in four unknown fields , as we known them today .he , together with fitzgerald and hertz , are credited with taking the mystery out of maxwell s formulation .he is also responsible for introducing vector notation , independently of and contemporaneously with gibbs ; he discovered `` poynting s theorem '' independently and found the `` lorentz '' force of a magnetic field on a moving charged particle . in 1888 - 89heaviside evaluated the distorted patterns of the fields of a charge moving in vacuum and in a dielectric medium , the first influencing fitzgerald to think about a possible explanation of the michelson - morley experiment , and the second essentially a prediction of cherenkov radiation . in the 1880s and 1890s he perfected and published his operational calculus for the benefit of engineers . in 1902 , kennelly and heaviside independently proposed a conducting region in the upper atmosphere ( kennelly - heaviside layer ) as responsible for the long - distance propagation of telegraph signals around the earth .self - educated and a loner , heaviside jousted in print with `` the cambridge mathematicians '' and was long ignored by the scientific establishment ( with some notable exceptions ) .he finally received recognition , becoming a fellow of the royal society in 1891 .he died in 1925 .+ paul dirac was born in bristol , england in 1902 of a english mother and swiss father . educated in bristol schools , including the technical college where his father taught french , dirac studied electrical engineering at the university of bristol , obtaining his b. eng . in 1921 .he decided on a more mathematical career and completed a degree in mathematics at bristol in 1923 .he then went to st .john s college , cambridge where he studied and published under the supervision of r. h. fowler .fowler showed him the proofs of heisenberg s first paper on matrix mechanics ; dirac noticed an analogy between the poisson brackets of classical mechanics and the commutation relations of heisenberg s theory .the development of this analogy led to his ph.d .thesis,``quantum mechanics , '' and publication in 1926 of his mathematically consistent general theory of quantum mechanics in correspondence with hamiltonian mechanics , an approach distinct from heisenberg s and schrdinger s .dirac became a fellow of st .john s college in 1927 , the year he published his paper on the `` second '' quantization of the electromagnetic field . 
The relativistic equation for the electron followed in 1928. His treatise _The Principles of Quantum Mechanics_ (first edition, 1930) gave a masterful general formulation of the theory. Elected Fellow of the Royal Society in 1930 and as Lucasian Professor of Mathematics at Cambridge in 1932, he shared the 1933 Nobel Prize in Physics with Schrödinger "for the discovery of new productive forms of atomic theory." Dirac made many other important contributions to physics: antiparticles, the quantization of charge through the existence of magnetic monopoles, the path integral approach, ... . He retired in 1969 and in 1972 accepted an appointment at Florida State University where he remained until his death in 1984.

From 1894 to 1898 Oliver Heaviside was publishing his operational calculus in _The Electrician_. In the March 15, 1895 issue he devoted a section to "Theory of an impulsive current produced by a continued impressed force." In it is the following partial paragraph:

"We have to note that if is any function of time, then is its rate of increase. If, then, as in the present case, is zero before and constant after, is then zero except when. It is then infinite. But its total amount is. _That is to say, means a function of which is wholly concentrated at the moment, of total amount. It is an impulsive function, so to speak._ The idea of an impulse is well known in mechanics, and it is essentially the same here. Unlike the function, the function does not involve appeal either to experiment or to generalised differentiation, but only involves the ordinary ideas of differentiation and integration pushed to their limit."
[Emphasis added.]

In subsequent articles Heaviside used his impulse function extensively to treat various examples of the excitation of electrical circuits. In Dirac's _The Principles of Quantum Mechanics_ (1930) he introduces the delta function and gives an extensive discussion of its properties and uses.
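Since the displayed equations of both definitions are missing from this extraction, the standard modern statements may be worth recalling; these are textbook forms, not the original notations of Heaviside or Dirac:
$$\delta(x) = 0 \;\; (x \neq 0), \qquad \int_{-\infty}^{\infty} \delta(x)\, dx = 1, \qquad \int_{-\infty}^{\infty} f(x)\,\delta(x-a)\, dx = f(a),$$
together with the equivalent characterization as the derivative of the unit step,
$$\delta(x) = \frac{d\,\theta(x)}{dx}, \qquad \theta(x) = \begin{cases} 0, & x < 0, \\ 1, & x > 0. \end{cases}$$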
In subsequent editions he alters the treatment somewhat; I quote from the third edition (1947):

"*15. The function.* Our work in 10 led us to consider quantities involving a certain kind of infinity. To get a precise notation for dealing with these infinities, we introduce a quantity depending on a parameter satisfying the conditions. To get a picture of, _take a function of the real variable which vanishes everywhere except inside a small domain, of length say, surrounding the origin, and which is so large inside this domain that its integral over the domain is unity_. The exact shape of the function inside this domain does not matter, provided there are no unnecessarily wild variations (for example provided the function is always of order). Then in the limit this function will go over into."
''[ emphasis added ] a page later , dirac gives an alternative definition : `` an alternative way of defining the $\delta$ function is as the differential coefficient $\epsilon'(x)$ of the function $\epsilon(x)$ given by $\epsilon(x) = 0\ ( x < 0 )$ , $\epsilon(x) = 1\ ( x > 0 )$ . we may verify that this is equivalent to the previous definition . . . '' this definition is explicitly heaviside s definition ( ) . and the descriptions in words are strikingly similar , but what else could they be ? + the third example concerns `` schumann resonances , '' the extremely low frequency ( elf ) modes of electromagnetic waves in the resonant cavity formed between the conducting earth and ionosphere . dimensional analysis with the speed of light $c$ and the circumference of the earth $2 \pi a$ gives an order of magnitude for the frequency of the lowest possible mode , $f \sim c / 2 \pi a \approx 7.5$ hz . the spherical geometry , with legendre functions at play , leads to a series of elf modes with frequencies $f_n = ( c / 2 \pi a ) \sqrt{ n ( n + 1 ) }$ for the cavity between two perfectly conducting spherical surfaces . it was perhaps j. j. thomson who first solved for these modes ( 1893 ) , with many others subsequently . it was winfried otto schumann , a german electrical engineer , who in 1952 applied the resonant cavity model to the earth - ionosphere cavity , although others before him had used wave - guide concepts .
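a quick numerical check of these estimates ( a sketch ; the mean earth radius and the lossless - cavity formula above are the only inputs ) :

import math

c = 2.99792458e8      # speed of light, m/s
a = 6.371e6           # mean earth radius, m (assumed value)

f_dim = c / (2.0 * math.pi * a)
print(round(f_dim, 1), "hz")          # dimensional estimate, about 7.5 hz
for n in (1, 2, 3):
    print(n, round(f_dim * math.sqrt(n * (n + 1)), 1), "hz")   # about 10.6, 18.3, 25.9 hz
print(round(1.0 / (f_dim * math.sqrt(2.0)), 3), "s")           # period of the lowest mode, ~0.094 s

the observed resonances sit lower , near 8 , 14 and 20 hz , and the lowest lossless - cavity mode corresponds to a period of roughly 0.1 s ; both numbers bear on what follows , the `` broadened and lowered '' lines described in the next paragraph and fitzgerald 's estimate quoted further below .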
because the earth and especially the ionosphere are not very good conductors , the resonant lines are broadened and lowered in frequency , but still closely following the legendre function rule , with . in a series of papers from 1952 to 1957 , schumann discussed damping , the power spectrum from excitation by lightning , and other aspects .since their first clear observation in 1960 , the striking resonances have been studied extensively . + although schumann can be said to have initiated the modern study of extreme elf propagation and many have been occupied with the peculiarities of long - distance radio transmission since kennelly and heaviside , two names emerge as earlier students of at least the lowest elf mode around the earth .those names and dates are nicola tesla , serbian - american inventor , physicist , and engineer , ( 1905 ) and george francis fitzgerald , irish theoretical physicist , ( 1893 ) . indeed , there are those that claim that tesla actually observed the resonance .george francis fitzgerald was born in 1851 near dublin and home - schooled ; his father was a minister and later a bishop in the irish protestant church .he studied mathematics and science at the university of dublin , receiving his b.a . in 1871 .for the next six years he pursued graduate studies , becoming a fellow of trinity college , dublin in 1877 .he served as college tutor and as a member of the department of experimental physics until 1881 when he was appointed professor of natural and experimental philosophy , university of dublin .fitzgerald s researches were largely but not exclusively in optics and electromagnetism .working out the amount of radiation emitted by discharging circuits in 1883 , he foresaw the possibility of hertz s experiments ; in 1889 he had the intuition that a length contraction proportional to in the direction of motion could explain the null effect of the michelson - morley experiment ( fitzgerald - lorentz contraction ) .he was elected fellow of the royal society in 1883 .a model professional citizen , fitzgerald served as officer in scientific societies , as external examiner in britain , and on irish committees concerned with national education .he died in 1901 at the early age of 49 .+ [ htp ] nikola tesla was born in smiljan , croatia in 1856 of serbian parents .he studied electrical engineering at the technical university in graz , austria and at prague university .he worked in paris as an engineer , 1882 - 83 , and then in 1884 emigrated to the us where he worked for a short time for thomas edison .but in may 1885 tesla switched to work for edison s competitor , george westinghouse , to whom tesla sold his patent rights for a - c dynamos , polyphase transformers , and a - c motors .later he set up an independent laboratory to pursue his inventions .he became a us citizen in 1891 , the year he invented the tesla coil . for six or seven months in 1899 - 1900tesla was based in colorado springs where he speculated about terrestrial standing waves and conducted various startling experiments such as man - made lightning bolts up to 40 meters in length . in 1900he moved to long island where he began to built a large tower for long - distant transmission of electromagnetic energy . in his lifetimehe had hundreds of patents .although in later life he was discredited for his wild claims and died impoverished in 1943 , he is recognized as the father of the modern a - c high - tension power distribution system used worldwide . 
winfried otto schumann was born in tübingen , germany in 1888 , the son of a physical chemist . he studied electrical engineering at the technische hochschule in karlsruhe , earning his first degree in 1909 and his dr .- ing . in 1912 . he worked in electrical manufacturing until 1914 ; during world war i he served as a radio operator . in 1920 schumann was appointed as associate professor of technical physics at the university of jena . in 1924 he became professor for theoretical electrical engineering , technische hochschule , munich ( now the technical university ) where he remained until retirement , apart from a year ( 1947 - 48 ) at the wright - patterson air force base in ohio . schumann s early research was in high - voltage engineering . in munich , for 25 years his interests were in plasmas and wave propagation in them . then from 1952 to 1957 , as already noted , he worked on elf propagation in the earth - ionosphere cavity . later , into retirement after 1961 , his research was in the motion of charges in low - frequency electromagnetic fields . schumann died in 1974 at the age of 86 . + in 1900 tesla filed a patent application entitled , `` art of transmitting electrical energy through the natural mediums . '' the united states patent office granted him the patent no . 787,412 on april 18 , 1905 . to convey the thrust of tesla s reasoning regarding the transmission of very low frequency electromagnetic energy over the surface of the earth , i quote important excerpts . + `` ... for the present it will be sufficient to state that the planet behaves like a perfectly smooth or polished conductor of inappreciable resistance with capacity and self induction uniformly distributed along the axis of symmetry of wave propagation and transmitting slow electrical oscillations without sensible distortion or attenuation ..... '' tesla treats the earth as a perfectly conducting sphere in infinite space . he does not know of the ionosphere or conduction in the atmosphere .
+ `` first . the earth s diameter passing through the pole should be an odd multiple of the quarter wave length - that is , of the ratio of the velocity of light - and four times the frequency of the currents . '' here tesla seems to be thinking of propagation through the earth . his description translates into a frequency of oscillation of order $c / 4 d \approx 6$ per second , $d$ being the earth s diameter . `` .... to give an idea , i would say that the frequency should be smaller than twenty thousand per second , though shorter waves might be practicable . the lowest frequency would appear to be six per second , ....
'' tesla is thinking of power transmission , not radiation into space , and so is keeping the frequency down , 6 per second being his minimum . + `` third . the most essential requirement is , however , that irrespective of frequency the wave or wave - train should continue for a certain interval of time , which i estimated to be not less than one twelfth or probably 0.08484 of a second and which is taken passing to and returning from the region diametrically opposite the pole over the earth s surface with a mean velocity of about four hundred and seventy - one thousand two hundred and forty kilometers per second . '' the stated speed , given with such accuracy , is $\pi / 2$ times the speed of light _ c _ .
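the arithmetic behind that factor , as a small sketch ( the round value $c = 300\,000$ km / s and the earth radius are my assumptions ) :

import math

c_round = 3.0e5           # km/s, the round value for the speed of light
v_stated = 471240.0       # km/s, the mean velocity quoted in the patent
a = 6371.0                # km, mean earth radius

print(v_stated / c_round)             # 1.5708 = pi/2 to the digits given
print(0.5 * math.pi * c_round)        # 471238.9..., i.e. 471240 to tesla's precision
print(math.pi * a / v_stated)         # one-way pole-to-pole over the surface, ~0.0425 s
print(2.0 * a / c_round)              # a diameter traversed at c, the same ~0.0425 s
print(2.0 * math.pi * a / v_stated)   # the to-and-return surface time, ~0.085 s, near 0.08484 s

so the stated speed and the quoted duration are at least mutually consistent .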
it makes the time for a pulse to travel over the surface from pole to pole equal to the time taken at speed _ c _ along a diameter . it would be natural to wish a pulse to have a certain duration if resonant propagation were envisioned , but the special significance of 0.08484 seconds is puzzling . equating the surface time to the diameter time seems to tie back to his use of the diameter to find the frequencies . + that tesla had ideas about low frequency electromagnetic modes encompassing the whole earth is clear . but he did not envision the conducting layer outside the earth s surface that creates a resonant cavity . there is no evidence that he ever observed propagation around the earth . and a decade earlier , fitzgerald discussed the phenomenon realistically . + in september 1893 fitzgerald presented a paper at the annual meeting of the british association for the advancement of science . an anonymous correspondent gave a summary of fitzgerald s talk in _ nature _ . i quote first from the report of the british association , which seems to be an abstract , submitted in advance of the meeting : `` professor j. j. thomson and mr . o. heaviside have calculated the period of vibrations of a sphere alone in space and found it about 0.59 second . the fact that the upper regions of the atmosphere conduct makes it possible that there is a period of vibration due to the vibrations similar to those on a sphere surrounded by a concentric spherical shell . in calculating this case it is not necessary to consider propagation in time for an approximate result , .
the value of the time of vibration obtained by this very simple approximation is applying this to the case of the earth with a conducting layer at a height of 100 kilometres ( much higher than is probable ) it appears that a period of vibration of about one second is possible . a variation in the height of the conducting layer produces only a small effect upon this if the height be small compared to the diameter of the earth . . . . '' fitzgerald s mention of one second is a bit curious , but may be a typographical error . in the limit of , his formula yields , a value that is off by just from the correct for perfect conductivity . + in the account of the ba meeting in the september 28 , 1893 issue of _ nature _ , the reporter notes that `` professor g. f. fitzgerald gave an interesting communication on ` the period of vibration of disturbances of electrification of the earth .
' '' he notes the following points made by fitzgerald :

1 . `` . . . the hypothesis that the earth is a conducting body surrounded by a non - conductor is not in accordance with the fact . probably the upper regions of our atmosphere are fairly good conductors . ''

2 . `` we may assume that during a thunderstorm the air becomes capable of transmitting small disturbances . ''

3 . `` if we assume the height of the region of the aurora to be 60 miles or 100 kilometres , we get a period of oscillation of 0.1 second . ''

now the period of vibration is correct . + it is clear that in 1893 fitzgerald had the right model , got roughly the right answer for the lowest mode , and had the prescience to draw attention to thunderstorms , the dominant method of excitation of schumann resonances . my fourth example is the weizsäcker - williams method of virtual quanta , a theoretical approach to inelastic collisions of charged particles at high energies in which the electromagnetic fields of one of the particles in the collision are replaced by an equivalent spectrum of virtual photons . the process is then described in terms of the inelastic collisions of photons with the `` target . '' c. f. von weizsäcker and e. j.
williams were both at niels bohr s institute in copenhagen in the early 1930s when the validity of quantum electrodynamics at high energies was in question .their work in 1934 - 35 played an important role in assuaging those fears .the concept has found wide and continuing applicability in particle physics , beyond purely electrodynamic processes .+ but they were not the first to use the method .ten years earlier , in 1924 , even before the development of quantum mechanics , enrico fermi discussed the excitation and ionization of atoms in collisions with electrons and energy loss using what amounts to the method of virtual quanta . fermi was focused mainly on nonrelativistic collisions ; a key aspect of the work of weizscker and williams , the appropriate choice of inertial frame in which to view the process , was missing . nevertheless , the main ingredient , the equivalent spectrum of virtual photons to replace the fields of a charged particle , is fermi s invention .one of the last `` complete '' physicists , enrico fermi was born in rome .his father was a civil servant . at an early age he took an interest in science , especially mathematics and physics .he received his undergraduate and doctoral degrees from the scuola normale superiore in pisa .after visiting gttingen and leiden in 1924 , he spent 1925 - 26 at the university of florence where he did his work on what we call the fermi - dirac statistics of identical spin 1/2 particles .he then took up a professorship in rome where he remained until 1938 .he soon was leading a powerful experimental group that included edoardo amaldi , bruno pontecorvo , franco rasetti , and emilio segr .initially , their work was in atomic and molecular spectroscopy , but with the discovery of the neutron in 1932 the group soon switched to nuclear transmutations induced by slow neutrons and became preeminent in the field . stimulated by the solvay conference in fall 1933 , where the neutrino hypothesis was sharpened , fermi quickly created his theory of beta decay in late 1933/early 1934 .he was awarded the nobel prize in physics for the nuclear transmutation work in 1938 .he took the opportunity to emigrate from sweden to the u.s . in december that year , just as the news of the discovery of neutron - induced nuclear fission became public .initially at columbia , fermi moved to the university of chicago where , once the manhattan district was created , he was in charge of construction of the first successful nuclear reactor ( 1942 ) .later he was at los alamos .after the war he returned to chicago to build a synchrocyclotron powerful enough to create pions and permit study of their interactions .in his nine years at chicago he mentored a very distinguished group of ph.d .students , five of whom later became nobel laureates .+ [ htp ] carl friedrich von weizscker , son of a german diplomat , was born in kiel , germany . from 1929 to 1933he studied physics , mathematics , and astronomy in berlin ( with schrdinger ) , gttingen ( briefly , with born ) , and leipzig , where he was heisenberg s student , obtaining his ph.d . in 1933 .he was at bohr s institute , 1933 - 34 , where he did the work we discuss here .in the 1930s his research was in nuclear physics and astrophysics .notable was his work on energy production in stars , done contemporaneously with hans bethe . 
during world war ii he joined heisenberg in the german atomic bomb effort .he is credited with realizing that plutonium would be an alternative to uranium as a fuel for civilian energy production .after the war , he was the spokesman for the view that the german project was aimed solely at building a nuclear reactor , not a bomb . in 1946he went to the max planck institute in gttingen .his interests broadened to the philosophy of science and technology and their interactions with society .he became professor of philosophy at the university of hamburg in 1957 .then in 1970 until his retirement in 1980 , he was the director of a max planck institute for the study of the living conditions of the scientific - technological world .the later weizscker was a prolific author on the philosophy of science and society , and an activist on issues of nuclear weapons and public policy .+ [ h ] evan james williams was born in cwmsychpant , wales , and received his early education at llandysul county school where he excelled in literary and scientific pursuits .a scholarship student at the university of wales , swansea , he graduated with a m.sc . in 1924 .williams then studied for his ph.d . at the university of manchester under w. l. bragg ; a further ph.d .was earned at cambridge in 1929 and a welsh d.sc .a year later .his research was in both experiment and theory .nuclear and cosmic ray studies led to theoretical work on quantum mechanical calculations of atomic collisions and energy loss .he spent 1933 at bohr s institute in copenhagen where he worked in a loose collaboration with bohr and weizscker .he then held positions at manchester and liverpool before accepting the chair of physics at university of wales , aberystwyth in 1938 . elected fellow of the royal society in 1939 , a year later williams and g. e. roberts used a cloud chamber to make the first observation of muon decay .he served in the air ministry and admiralty during world war ii .his career was cut short in 1945 at age 42 . in his review of the penetration of charged particles in matter published in 1948, niels bohr laments that the review had originally been intended to be a collaboration with williams . + a newly minted ph.d . in 1924, enrico fermi addressed the excitation of atoms in collisions with electrons and the energy loss of charged particles in a novel way . 
the abstract of his paper is _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ das elektrische feld eines geladenen teilchens , welches an einem atom vorbeifliegt , wird harmonisch zerlegt , und mit dem elektrischen feld von licht mit einer passenden frequenzverteilung verglichen .es wird angenommen , dass die wahrscheinlichkeit , dass das atom vom vorbeifliegenden teilchen angeregt oder ionisiert wird , gleich ist der wahrscheinlichkeit fr die anregung oder ionisation durch die quivalente strahlung .diese annahme wird angewendet auf die anregung durch elektronenstoss und auf die ionisierung und reichweite der -strahlen ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ a rough literal translation is _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` the electric field of a charged particle that passes by an atom , when decomposed into harmonics , is equivalent to the electric field of light with an appropriate frequency distribution. it will be assumed that the probability that an atom will be excited or ionized by the passing particle is equal to the probability for excitation or ionization through the equivalent radiation .this hypothesis will be applied to the excitation through electron collisions and to the ionizing power and range of -particles . ''_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ that first sentence describes a key ingredient of the weizscker - williams method of virtual quanta .because he was working before quantum mechanics had emerged , fermi had to use empirical data for the photon - induced ionization and excitation of atoms to fold with the equivalent photon distribution .explicitly , fermi s expression for the probability of inelastic collision of a charged particle and an atom , to be integrated over equivalent photon frequencies and impact parameters of the collision , is + where is the nonrelativistic limit of the equivalent photon flux density and is the photon absorption coefficient . for k - shell ionization , for example ,an approximate form is + where is the k - shell threshold and _ h _ is an empirical constant .+ nine years later , e. j. williams , in his own work on energy loss , discussed the limitations of fermi s work in the light of proper quantum mechanical calculations of the absorption of a photon by an atom . 
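to make the structure of fermi 's argument concrete , here is a minimal sketch in my own notation ; it uses the familiar leading - logarithm form of the equivalent - photon spectrum ( normalization , cutoffs and the toy cross section are my choices , not fermi 's 1924 expressions or his empirical data ) :

import math

alpha = 1.0 / 137.036     # fine-structure constant
hbar_c = 197.327          # MeV fm

def n_gamma(omega, beta, gamma, z=1, b_min=1.0):
    # equivalent (virtual) photons per unit photon energy (1/MeV), leading-log form;
    # omega in MeV, impact-parameter cutoff b_min in fm
    b_max = gamma * beta * hbar_c / omega
    if b_max <= b_min:
        return 0.0
    return 2.0 * z * z * alpha / (math.pi * beta * beta * omega) * math.log(b_max / b_min)

def sigma_abs(omega, threshold=0.01):
    # toy photoabsorption cross section with a k-shell-like threshold (arbitrary units),
    # standing in for the empirical absorption data fermi had to fold in
    return (threshold / omega) ** 3 if omega > threshold else 0.0

# the excitation/ionization probability has the structure: integral of n_gamma * sigma_abs d omega
beta = 0.1
gamma = 1.0 / math.sqrt(1.0 - beta * beta)
total, omega = 0.0, 0.01
while omega < 10.0:
    d_omega = 0.05 * omega        # logarithmic steps in photon energy
    total += n_gamma(omega, beta, gamma) * sigma_abs(omega) * d_omega
    omega += d_omega
print(total)   # arbitrary units; the point is the fold of a photon flux with a photon cross section

weizsäcker and williams keep exactly this structure ; what they add , as the passages below make clear , is the choice of reference frame in which the spectrum is evaluated and of the frequencies that dominate there .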
_ a year later _ , in part iii of his letter to the _ physical review _ , after citing his 1933 approach to collisional energy loss using semi - classical methods , williams said , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` . .practically the same considerations apply to the formula of heitler and sauter for the energy lost by an electron in radiative collisions with an atomic nucleus . c. f. v. weizscker and the writer , in calculations shortly to appear elsewhere ,show that this formula may readily be derived by considering , in a system where the electron is initially at rest , the scattering by the electron of the harmonic components in the fourier spectrum of the perturbing force due to the nucleus ( which , in , is the moving particle ) .the calculations show that practically all the radiative energy loss comes from the scattering of those components with frequencies , and also that heitler and sauter s formula is largely free from the condition , which generally has to be satisfied in order that born s approximation ( used by h and s ) may be valid . 
. . . '' the virtual quanta of the fields of the nucleus passing an electron in its rest frame are compton - scattered to give bremsstrahlung . hard photons in the lab come from frequencies of order $m c^{2} / h$ in the rest frame . + weizsäcker used the equivalent photon spectrum together with the klein - nishina formula for compton scattering to show that the result was identical to the familiar bethe - heitler formula for bremsstrahlung .
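the frame bookkeeping behind that statement , as a rough numerical sketch ( the electron energy and the factor - of - two doppler estimate are illustrative assumptions , not numbers from the 1934 - 35 papers ) :

m_e = 0.511                 # electron rest energy in MeV
E_lab = 100.0               # lab-frame electron energy in MeV (illustrative)
gamma = E_lab / m_e         # lorentz factor, about 196

# in the electron's rest frame the nucleus sweeps past with this same gamma;
# its virtual photons are compton-scattered, which is effective only up to
# photon energies of order m_e c^2 in that frame.
omega_rest = m_e

# boosting a back-scattered photon to the lab multiplies its energy by up to ~2 gamma,
# so the resulting bremsstrahlung photons reach energies comparable to E_lab itself.
omega_lab = 2.0 * gamma * omega_rest
print(gamma, omega_rest, omega_lab)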
in a long paper published in 1935 in the proceedings of the danish academy , williams presented a more general discussion,``correlation of certain collision problems with radiation theory , '' with the first reference being to fermi .weizscker and williams exploited special relativity to show that in very high energy radiative processes the dominant energies are always of order of the light particle s rest energy when seen in the appropriate reference frame .the possible failure of quantum electrodynamics at extreme energies , posited by oppenheimer and others , does not occur .the apparent anomalies in the cosmic rays were in fact evidence of then unknown particles ( muons ) .+ fermi started it ; williams obviously knew of fermi s virtual photons ; he and weizsacker chose the right rest frames for relativistic processes .the `` weizscker - williams method of virtual quanta '' continues to have wide and frequent applicability .in 1959 valentine bargmann , louis michel , and valentine telegdi published a short paper on the behavior of the spin polarization of a charged particle with a magnetic moment moving relativistically in fixed , slowly varying ( in space ) electric and magnetic fields .the equation , known colloquially as the bmt equation in a pun on a new york city subway line , finds widespread use in the accelerator physics of high energy electron - positron storage rings .while the authors cite some earlier specialized work , notably by j. frenkel and h. a. kramers , they do not cite the true discoverer and expositor of the general equation . in the april 10 ,1926 issue of _ nature _llewellyn h. thomas published a short letter explaining and eliminating the puzzling factor of two discrepancy between the atomic fine structure and the anomalous zeeman effect , a paper that is cited for what we know as the `` thomas factor '' ( of 1/2 ) .thomas , then at bohr s institute , had listened before christmas 1925 to bohr and kramers arguing over goudsmit and uhlenbeck s proposal that the electron had an intrinsic spin .they concluded that the factor of two discrepancy was the idea s death knell .thomas suggested that a relativistic calculation should be done and did the basic calculation over one christmas weekend in 1925 . he impressed bohr and kramers enough that they urged the letter to _nature_. then thomas elaborated in a detailed 22-page paper that appeared in a january 1927 issue of philosophical magazine and is not widely known .it is this paper that contains all of bmt and more .born in london , england , llewellyn hilleth thomas was educated at the merchant taylor school and cambridge university , where he received his b.a . in 1924 .he began research under the direction of r. h. fowler who promptly went to copenhagen , leaving thomas to his own devices . in recompense fowler arranged for thomas to spend the year 1925 - 26 at bohr s institute where , among other things , he did the celebrated ( and neglected ) work described here . on his return to cambridgehe was elected a fellow of trinity college while still a graduate student .he received his ph.d . in 1927 . + in 1929 thomas emigrated to the us , to ohio state university , where he served for 17 years .notable while at ohio state was his invention in 1938 of the sector - focusing cyclotron , designed to overcome the effects of the relativistic change in the cyclotron frequency . during world warii he worked at the aberdeen proving ground . 
from 1946 to 1968he was at columbia university and the ibm watson laboratory .there he did research on computing and computers , including invention of a version of the magnetic core memory .he retired from columbia and ibm in 1968 to become university professor at north carolina state university until a second retirement in 1976 .he was a member of the national academy of sciences .valentine bargmann was born and educated in berlin .he attended the university of berlin from 1925 to 1933 . then with the rise of hitler, he moved to the university of zurich under gregor wentzel for his ph.d .soon after , he emigrated to the us .he served as an assistant to albert einstein at the institute for advanced study where he collaborated with peter bergmann . during world warii bargmann worked with john von neumann on shock wave propagation and numerical methods . in 1946he joined the mathematics department at princeton university .there he did research on mathematical physics topics , including the lorentz group , lie groups , scattering theory , and bargmann spaces , collaborating famously with eugene wigner in 1948 on relativistic equations for particles of arbitrary spin , and of course with m and t of bmt .he was awarded several prizes and elected a member of the national academy of sciences .louis michel was born and grew up in roanne , loire , france .he entered ecole polytechnique in 1943 and , after military service , joined the `` corps des poudres , '' a governmental basic and applied research institution , and was assigned back to ecole polytechnique to do cosmic ray research .he was sent to work with blackett in manchester , but in 1948 began theoretical work with leon rosenfeld .he completed his paris ph.d . in 1950 on weak interactions , especially the decay spectrum of electrons from muon decay and showed its dependence ( ignoring the electron s mass ) on only one parameter , known as the `` michel parameter . ''michel spent time in copenhagen in the fledgling cern theory group and at the institute for advanced study in princeton before returning to france .he held positions in lille , orsay , and ecole polytechnique , and finally from 1962 at the institut des hautes etudes scientifiques at bures - sur - yvette . a major part of michel s research concerned spin polarization in fundamental processes , of great importance after the discovery of parity non - conservation in 1957 , with the bmt paper somewhat related .later research spanned strong interactions and g - parity , symmetries and broken symmetries in particle and condensed matter physics , and mathematical tools for crystals and quasi - crystals .michel was a leader of french science , president of the french physical society , member of the french academy of sciences , and recipient of many other honors .although born in budapest , valentine louis telegdi spent his minority moving all around europe with his family , a likely explanation for his fluency in numerous languages .the family was in italy in the 1940s . in 1943they finally found refuge from the war in lausanne , switzerland where telegdi studied at the university . 
in 1946telegdi began graduate studies in nuclear physics at eth zrich in paul scherrer s group .he came to the university of chicago in the early 1950s .there he exhibited his catholic interests in research .noteworthy was a paper in 1953 with murray gell - mann on charge independence in nuclear photo - processes .his name is associated with a wide variety of important measurements or discoveries : , the ratio of axial - vector to vector coupling in nuclear beta decay ; , measurement of the anomalous magnetic moment of the muon ; regeneration ; muonium , an atomic - like bound state of a positive muon and an electron ; and numerous others . perhaps the best known work is the independent discovery of parity violation in the pion - muon decay chain , published in early 1957 , with jerome friedman . + in 1976 telegdi moved back to switzerland to take up a professorship at eth zrich , with research and advisory roles at cern . in retirementhe spent time each year at caltech and ucsd . among his many honors were memberships in the us national academy of sciences and the royal society , and , in 1991 , co - winner of the wolf prize .+ [ h ] to show the close parallel between thomas s work and the bmt paper 32 years later , we quote significant equations from both in facsimiles of the original notation . in both thomas s 1927 paper and the bmt paper of 1959 the motion of the charged particle is described by the lorentz force equation , with no contribution from the action of the fields on the magnetic moment . in thomas s textthe lorentz force equation reads + + here is the particle s space - time coordinate and is its proper time . in bmt s notationthe equation reads more compactly as + where is the particle s 4-velocity , and now is the proper time . for the spin polarization motion ,we quote first the bmt equation : + \ \ \ \ \ ( 7)\ ] ] + here is the particle s 4-vector of spin angular momentum and is the g - factor of the particle s magnetic moment , . for spinmotion thomas used both a spin 4-vector and an antisymmetric second - rank tensor . here is thomas s equivalent to the spatial part of bmt s spin equation , as he wrote it out explicitly .his is what is usually called the relativistic factor ; his : + \:\right]\ { \bf \times \ w } \ \ \ \ \ \ \ \ \ \ ( 4.121)\end{aligned}\ ] ] thomas then says , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this last is considerably simpler when , when it takes the form + \:\right\}{\bf \times \ w } \ \ \ \ \ \ \ \ \ ( 4.122)\ ] ] in this case in the same approximation , + and + `` the more complicated forms when involving explicitly on the right - hand side can be found easily if required . 
'' compare thomas s ( 4.123 ) for with the corresponding bmt equation , . clearly thomas felt that his explicit general form ( 4.121 ) for arbitrary g - factor ( arbitrary ) was of more use than a compact 4-vector form such as the bmt equation ( 7 ) . + in 1926 - 27 thomas was concerned about atomic physics ; his focus was on the `` thomas factor '' in the comparison of the fine structure and the anomalous zeeman effect in hydrogen . bargmann , michel , and telegdi focused on relativistic spin motion and how electromagnetic fields changed transverse polarization into longitudinal polarization and vice versa , with application to the measurement of the muon s g - factor in a storage ring . but it was all in thomas s 1927 paper , 32 years earlier . these five examples from physics illustrate the different ways that inappropriate attributions are given for significant contributions to science . ludvig vladimir lorenz lost out to the homophonous dutchman ( and emil wiechert ) largely because he was in some sense before his time and was a dane who published an appreciable part of his work in danish only . he died in 1891 , just as lorentz was most productive in electromagnetic theory and applications . by 1900 lorenz s name had virtually vanished from the literature . recently a move is underway ( in which the author is a participant ) to recognize lorenz for the `` lorentz '' condition and gauge ; h. a. lorentz has many other achievements attributed justly to him . + in the physics literature the dirac delta function often goes without attribution , so common is its usage . but when a name is attached , it is dirac s , not heaviside s . the reason , i think , is that in the 1930 s and 1940 s dirac s book on quantum mechanics was a standard text and extremely influential .
with almost no references in the book, it is not surprising the dirac did not cite heaviside , even though , as an electrical engineer , he surely was aware of heaviside s operational calculus and his impulse function .those learning quantum mechanics then ( and now ) would be only dimly or not at all aware of who was oliver heaviside .scientists look forward , not back ; 35 or 40 years is a lifetime or two .i can hear the voices saying : if some electrical engineer used the same concept in the 19th century , so be it .but our delta function is dirac s .+ schumann resonances are a different case .for some years , now enhanced by the internet , a vocal minority have trumpeted tesla s discovery of the low - frequency electromagnetic resonances in the earth - ionosphere cavity .i believe that claim is incorrect , but tesla was a genius in many ways .it is not surprising that his discussion of resonances around the earth in his 1905 patent might be interpreted by some as a prediction or even discovery of schumann resonances .the interesting aspect is that it was a theoretical physicist , not an electrical engineer , who first discussed the earth - ionosphere cavity , and in an insightful way .fitzgerald , in 1893 , was indeed well before the appropriate time . anda talk to the british association , followed by brief mention in a column in _ nature _ , is not a prominent literature trail for later scientists .schumann may be forgiven for not citing fitzgerald , even though early on heaviside and kennelly addressed the effects of the ionosphere on radio propagation and a number of researchers examined the cavity in the intervening years .i suggest a fitting solution to attribution would be `` schumann - fitzgerald resonances . ''+ the name `` weizscker - williams method ( of virtual quanta ) '' is mainly the fault of the theoretical physics community . certainly , in the mid-1930 s the questions about the failure of qed at high energies were resolved by the work of weizscker and williams , and williams s danish academy paper showed the wide applicability of the method of virtual quanta together with special relativity .but fermi was the first to publish the idea of the equivalence of the fourier spectrum of the fields of a swiftly moving charged particle to a spectrum of photons in their actions on a struck system .williams knew that and so stated .the argument will be made that the choices of appropriate reference frame and struck system are vital to the weizscker - williams method , something fermi did not discuss , but fermi deserves his due .+ the relativistic equation for spin motion in electromagnetic fields is perhaps a narrow topic chiefly of interest to accelerator specialists .it is striking that it was fully developed by thomas at the dawn of quantum mechanics and before the discovery of repetitive particle accelerators such as the cyclotron .he was surely before his time .bargmann , michel , and telegdi were of an other era , with high energy physics a big business .the cuteness of the acronym bmt and the prestige of the authors made searches for prior work superfluous .although the use of `` bmt equation '' is common enough , it is encouraging that in the accelerator physics community the phrase `` thomas - bmt equation '' is now frequently used in research papers and in reviews and handbooks . + the zeroth theorem / arnold s law has some similarities to the `` matthew effect . 
'' the matthew effect describes how a more prominent researcher will reap all the credit even if a lesser known person has done essentially the same work contemporaneously , or how the most senior researcher in a group effort will get all the recognition , even though all the real work was done by graduate students or postdocs . the zeroth theorem might be considered as the first kind of matthew effect , but with some time delay , although some examples do not fit the prominent / lesser constraint . neither do my examples reflect , as far as i know , the possible influence by the senior researcher or friends to discount or ignore the contributions of others . the zeroth theorem stands on its own , examples often arising because the first enunciator was before his / her time or because the community was not diligent in searching the prior literature before attaching a name to the discovery or relation or effect . + * acknowledgments * + this article is the outgrowth of a talk given at the university of michigan in january 2007 at a symposium honoring gordon l. kane on his 70th birthday . i wish to thank bruno besser for _ his _ citation of my ref . 1 ; it drew my attention to e. p. fischer s adroit characterization of the phenomenon described here . i also thank mikhail plyushchay for pointing out arnold s law and michael berry for elaboration . and i thank anne j. kox for reminding me of wiechert s contribution to the story of the lorentz condition . + for readers wishing to learn more about the scientific work of the neglected , i suggest for ludvig lorenz , an article by helge kragh ; for emil wiechert , an article by joseph f. mulligan ; for oliver heaviside , the book by bruce j. hunt ; and for george f. fitzgerald , hunt s book and fitzgerald s collected works , already cited . for schumann , besser s paper is the obvious source . for the others , better known , a search in library catalogues or on google will yield results . + this work was supported by the director , office of science , high energy physics , u.s . department of energy under contract no . de - ac02 - 05ch11231 . e. p. fischer , `` fremde im gegenteil , '' ( very loosely , `` inappropriate attributions '' ) , _ die welt _ , m. v. berry , `` three laws of discovery , '' in quotations at http://www.phy.bris.ac.uk/people/berry_mv/index.html . v. i. arnold , `` on teaching mathematics , '' russ . math . surv . * 53 * ( 1 ) , 229 - 236 ( 1998 ) . h. a. lorentz , `` weiterbildung der maxwellischen theorie . elektronentheorie , '' encyklopädie der mathematischen wissenschaften , band v:2 , heft 1 , v.14 , pp . 145 - 280 ( 1904 ) . h. a. lorentz , _ theory of electrons _ ( teubner , leipzig / stechert , 1909 ) [ 2nd ed . , 1916 ; reprinted by dover , new york , 1952 ] . l. v. lorenz , `` über die identität der schwingungen des lichts mit den elektrischen strömen , '' ann . phys . chem . * 131 * , 243 - 263 ( 1867 ) [ `` on the identity of the vibrations of light with electrical currents , '' philos . mag . , ser . 4 , * 34 * , 287 - 301 ( 1867 ) ] . e. wiechert , `` elektrodynamische elementargesetze , '' archives néerlandaises des sciences exactes et naturelles , ser . 2 , * 5 * , 549 - 573 ( 1900 ) . j. d. jackson and l. b. okun , `` historical roots of gauge invariance , '' rev . mod . phys . * 73 * , 663 - 680 ( 2001 ) , p. 668 . in 1867 , the partial derivative notation was not in use . ordinary derivatives and partial derivatives were distinguished by context . j. c.
maxwell , `` on a method of making a direct comparison of electrostatic and electromagnetic force ; with a note on the electromagnetic theory of light , '' philos .london * 158 * , 643 - 657 ( 1868 ) .h. a. lorentz , `` maxwell s elektromagnetische theorie , '' encyklopdie der mathematischen wissenschaften , band v:2 , heft1 , v.13 , pp .63 - 144 ( 1904 ) .p. a. m. dirac , _ the principles of quantum mechanics _ ( oxford university press , 1930 ) ,22 ; sect . 20 ( 2nd ed . , 1935 ) ; sect . 15 ( 3rd ed . , 1947 ) .o. heaviside , electromagnetic theory - lxiv , sect .247 " the electrician,*34 * , 599 - 601 ( 1894 - 95 ) ; reprinted in o. heaviside , _ electromagnetic theory _ , 3 vols . in one ( dover , new york , 1950 ) , sect .249 , p. 133, `` theory of an impulsive current produced by a continued impressed force , '' .b. van der pol and h. bremmer , _ operational calculus _ ( cambridge university press , 1950 ) , sect .use of the height of the ionosphere instead of leads to different modes of much higher frequencies .w. o. schumann , `` ber die strahlungslosen eigenschwingungen einer leitenden kugel , die von luftschicht und einer ionosphrenhlle umgeben ist , '' z. naturforsch . a * 7 * , 149 - 154 ( 1952 ) .b. p. besser , `` synopsis of the historical development of schumann resonances , '' radio science * 42 * rs2s02 pp.20 ( 2007 ) .besser s paper contains an extensive history and a detailed discussion of schumann s work . for just one example of the power spectrum see j. d. jackson , _ classical electrodynamics _( wiley , new york , 1988 ) , fig . 8.9 .n. tesla , u.s .patent no .787,412 ( april 18 , 1905 ) . the full text can be found on the internet at http://freeinternetpress.com / mirrors / tesla_haarp/. g. f. fitzgerald , `` on the period of vibration of electrical disturbances upon the earth , '' rep .assoc . adv .sc . * 63 * , 682 ( 1893 ) [ abstract ] .`` the period of vibration of disturbances of electrification of the earth , '' nature * 48 * , no .( september 28 , 1893 ) ; reprinted in _ the scientific writings of george francis fitzgerald _ , ed .joseph larmor ( hodges , figgis , dublin , 1902 ) , p.301 - 302 . c. f. von weizscker , `` ausstrahlung bei stssen sehr schneller elektronen , '' z. f. physik * 88 * , 612 - 625 ( 1934 ) .e. j. williams , `` nature of the high energy particles of penetrating radiation and status of ionization and radiation formulae , '' phys. rev . * 45 * , 729 - 730 ( l ) ( 1934 ) .e. j. williams , `` correlation of certain collision problems with radiation theory , '' kgl .danske videnskab .xii * , no .4 ( 1935 ) .e. fermi , `` ber die theorie des stosses zwischen atomen und elektrisch geladen teilchen , '' z. f. physik * 29 * , 315 - 327 ( 1924 ) .e. j. williams , `` applications of the method of impact parameter in collisions , '' proc .( london ) * a139 * , 163 - 186 ( 1933 ) .v. bargmann , l. michel , and v. l. telegdi , `` precession of the polarization of particles moving in a homogeneous electromagnetic field , '' phys .* 2 * , 435 - 436 ( 1959 ). l. h. thomas , `` the motion of the spinning electron , '' nature * 117 * , 514 ( april 10 , 1926 ) .l. h. thomas , `` recollections of the discovery of the thomas precessional frequency , '' in _ high energy spin physics - 1982 _ , ed .g. m. bunce ( american institute of physics , new york , 1983 ) , p. 4- 5 . a delightful short reminiscence. l. h. thomas , `` the kinematics of an electron with an axis , '' phil . mag ., ser . 
7 , * 3 * , 1 - 22 ( 1927 ) .i put fitzgerald s name second because , after all , he treated only the lowest resonant mode .b. w. montague , `` polarized beams in high energy storage rings , '' phys . rep . * 113 * , 1 - 96 ( 1984 ) .a. w. chao and m. tigner , _ handbook of accelerator physics and engineering _( world scientific , singapore , 1999 ) .r. k. merton , `` the matthew effect in science , '' science * 159 * ( 3810 ) 56 - 63 ( 1968 ) ; `` the matthew effect in science , ii , '' isis * 79*,606 - 623 ( 1988 ) .h. kragh , `` ludvig lorenz and nineteenth century optical theory : the work of a great danish scientist , '' appl .optics * 30 * , 4688 - 4695 ( 1991 ) . j. e. mulligan , `` emil wiechert ( 1861 - 1928 ) : esteemed seismologist , forgotten physicist , '' am .* 69 * , 277 - 287 ( 2001 ). b. j. hunt , _ the maxwellians _ ( cornell , ithaca , 1991 ) .
the zeroth theorem of the history of science ( enunciated by e. p. fischer ) , widely known in the mathematics community as arnold s principle ( decreed by m. v. berry ) , states that a discovery ( rule , regularity , insight ) named after someone ( often ) did not originate with that person . i present five examples from physics : the lorentz condition defining the lorentz gauge of the electromagnetic potentials ; the dirac delta function ; the schumann resonances of the earth - ionosphere cavity ; the weizsäcker - williams method of virtual quanta ; the bmt equation of spin dynamics . i give illustrated thumbnail sketches of both the true and reputed discoverers and quote from their `` discovery '' publications .
processes in which proteins assemble on membranes to drive topology changes are ubiquitous in biology . despite extensive experimental and theoretical investigations ( e.g. ) , how assembly - driven membrane deformation depends on protein properties , membrane properties , and membrane compositional inhomogeneity remains incompletely understood .an important example of this phenomenon occurs during the formation of an enveloped virus , when the virion acquires a membrane envelope by budding from its host cell .budding is typically driven at least in part by assembly of capsid proteins or viral membrane proteins , and many enveloped viruses , including hiv and influenza , preferentially bud from membrane microdomains ( e.g. lipid rafts ) .understanding how viruses exploit membrane domain structures to facilitate budding would reveal fundamental aspects of the viral lifecycle , and could focus efforts to identify targets for new antiviral drugs that interfere with budding .furthermore , there is much interest in developing enveloped viral nanoparticles as targeted transport vehicles equipped to cross cell membranes through fusion . more generally ,identifying the factors that make viral budding robust will shed light on other biological processes in which high - order complexes assemble to reshape membranes . toward this goal, we perform dynamical simulations in which capsids simultaneously assemble and bud from model lipid membranes .we identify mechanisms by which membrane adsorption either promotes or impedes assembly , and we find multiple mechanisms by which a membrane microdomain significantly enhances assembly and budding . enveloped viruses can be divided into two groups based on how they acquire their lipid membrane envelope . for the first group , which includes influenza and type c retroviruses ( e.g. hiv ) ,the ( immature ) nucleocapsid core assembles on the membrane concomitant with budding . in the second group ,a core assembles in the cytoplasm prior to envelopment ( reviewed in ) . in many families from this group ( e.g. alphavirus )envelopment is driven by assembly of viral transmembrane glycoproteins around the core . for all enveloped viruses ,membrane deformation is driven at least in part by a combination of weak protein - protein and protein - lipid interactions .thus , properties of the membrane should substantially affect budding and assembly timescales .in support of this hypothesis , many viruses from both groups preferentially bud from membrane microdomains 10 - 100 nm in size that are concentrated with cholesterol and/or sphingolipids .a critical question is whether viruses utilize microdomains primarily to concentrate capsid proteins or other molecules , or if the geometric and physical properties of domains facilitate budding .answering these questions through experiments alone has been challenging .extensive previous theoretical investigations have studied budding by pre - assembled cores or nanoparticles ( e.g. ) , budding triggered by non - assembling subunits , or used a continuum model to study assembly and budding .most closely related to our work , matthews and likos recently performed simulations on a coarse - grained model of patchy colloidal particles assembling on a membrane represented as a triangulated surface .these elegant simulations provided a first look at the process of simultaneous assembly and budding and showed that subunit adsorption onto a membrane facilitates assembly through dimensional reduction . 
here , we perform dynamical simulations on a model which more closely captures the essential geometric features of capsid subunits and lipid bilayers , and we explore how the presence of a microdomain within the membrane can influence assembly and budding . our simulations show that , while the membrane can promote assembly of partial capsids , membrane deformations can introduce barriers that hinder completion of assembly . we find that a microdomain within a certain size range favors membrane deformations that diminish these barriers , and thus can play a key role in enabling complete assembly and budding . furthermore , our simulations suggest that assembly morphologies depend crucially on multiple timescales , including those of protein - protein association , membrane deformation , and protein adsorption onto the membrane . finally , we discuss potential effects of simplifications in our coarse - grained model and how a key prediction from the simulations can be tested in an _ in vitro _ assay . [ figure caption ( fig . [ fig : model ] ) : ( b ) a slice of the membrane and the entire capsid are shown during budding , with the capsomer - lipid interaction sites colored green , and the domain lipids colored purple . ( c ) a homogeneous membrane patch , with blue and cyan beads representing the lipid heads and lipid tails respectively . ( d ) a two - phase membrane , with red and orange beads representing the domain lipid heads and tails respectively . images were generated using vmd . ] due to the large length and time scales associated with assembly of a capsid , simulating the process with an all - atom model is well beyond the capabilities of current computers . therefore , in this paper we aim to elucidate the principles underlying simultaneous assembly and budding by considering a simplified geometric model for capsid proteins , inspired by previous simulations of empty capsid assembly and assembly around nucleic acids . similarly , we consider a simplified model for lipids which recapitulates the material properties of biological membranes . the membrane is represented by the model from cooke and deserno , in which each amphiphile is represented by one head bead and two tail beads connected by fene bonds ( fig . [ fig : model]c ) . this is an implicit solvent model ; hydrophobic forces responsible for the formation of bilayers are mimicked by attractive interactions between tail beads with interaction strength . this model enables computational feasibility while allowing the formation of bilayers with physical properties such as fluidity , diffusivity , and rigidity that are easily tuned across the range of values measured in biological membranes . the bead diameter is set to nm to obtain bilayers with widths of 5 nm and the lipid - lipid interaction strength is set to to obtain fluid membranes with bending modulus . when studying the effect of a domain , we consider two types of lipids , with _ m _ and _ d _ referring respectively to the lipids outside and inside of the domain , and tail - tail interaction parameters ( eq . [ eq : hydrophobicity ] in supporting information ) set to , while is a variable parameter that controls the line tension of the domain , .
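as a concrete illustration of the tail - tail attraction and the reduced cross - species interaction just described , the following sketch evaluates the cosine - squared attractive potential of the cooke - deserno model ; the range ( w_c = 1.6 in reduced units ) and the strengths used below are illustrative assumptions , not the values used in the simulations reported here ( the next paragraph describes how the cross term tunes the line tension ) .

```python
import numpy as np

def tail_tail_attraction(r, eps=1.0, sigma=1.0, wc=1.6):
    """Cooke-Deserno cosine^2 attraction between lipid tail beads.

    V(r) = -eps                                 for r < rc
         = -eps * cos^2(pi*(r - rc)/(2*wc))     for rc <= r < rc + wc
         =  0                                   otherwise,
    with rc = 2^(1/6)*sigma.  eps, sigma, wc are in reduced units;
    wc = 1.6*sigma is an illustrative choice, not the value from the paper.
    """
    rc = 2.0 ** (1.0 / 6.0) * sigma
    r = np.asarray(r, dtype=float)
    v = np.zeros_like(r)
    v[r < rc] = -eps
    mid = (r >= rc) & (r < rc + wc)
    v[mid] = -eps * np.cos(np.pi * (r[mid] - rc) / (2.0 * wc)) ** 2
    return v

def cross_attraction(r, eps_md, **kw):
    """Bulk-domain (cross-species) tail attraction: same form, smaller strength."""
    return tail_tail_attraction(r, eps=eps_md, **kw)

if __name__ == "__main__":
    r = np.linspace(1.0, 2.8, 7)
    print(tail_tail_attraction(r))          # like-lipid tails
    print(cross_attraction(r, eps_md=0.5))  # weaker bulk-domain attraction
```

reducing the cross - species strength below the like - species value raises the interfacial energy between the two lipid types , which is the handle used to set the domain line tension in the text .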
varying from 0 to tunes the line tension from to 0 ( si sec .[ sec : raft_model ] ) .further details for the membrane model are provided in si sec .[ sec : raft_model ] .we modified and extended a model for assembly of non - enveloped capsids to describe assembly on a membrane .a complete listing of the interaction potentials is provided in si section [ sec : capsomer_model ] ; we summarize them here .the capsid subunit is a rigid body with a pentagonal base and radius of formed by 15 attractive and 10 repulsive interaction sites ( fig .[ fig : model ] a ) .subunit assembly is mediated through a morse potential between ` attractor ' pseudoatoms located in the pentagon plane , with one located at each subunit vertex and 2 along each edge .attractions occur between like attractors only , meaning that there are vertex - vertex and edge - edge attractions , but no vertex - edge attractor interactions .the 10 repulsive interaction sites are arranged symmetrically above and below the pentagon plane , so as to favor a subunit - subunit angle consistent with a dodecahedron ( 116 degrees ) .the motivation for our modifications to the model are described in si section [ sec : capsomer_model ] .the potential between capsomers and lipids accounts for attractive interactions and excluded - volume .first , we add to the capsomer body six attractor pseudoatoms that have attractive interactions with lipid tail beads .when simulating a phase - separated membrane , the attractors interact only with the domain lipid tails ( fig . [ fig : model]b ) .the attractors are placed one above each vertex and one above the center of the pentagon , each located a distance of above the pentagon plane ( fig .[ fig : model]a ) .these are motivated by , e.g. , the myristate group on retrovirus gag proteins that promotes subunit adsorption by inserting into the lipid bilayer .the attractor - tail interaction is the same form as the lipid tail - tail interaction except that there is no repulsive component ( si eq .( [ eq : adh_hydrophobicity ] ) ) .it is parameterized by the interaction strength , , which tunes the adhesion energy per capsomer according to with ( si sec .[ sec : adhesion_energy ] ) . to account for capsomer - lipid excluded - volume interactions , a layer of 35 ` excluder ' beads , each with diameter , is placed in the pentagon plane ( fig .[ fig : model]a ) .excluders experience repulsive interactions with all lipid beads .since the mean location of the attractive interaction sites on adsorbed subunits is near the membrane midplane , the effective radius of the assembled capsid ( not including the lipid coat ) can be estimated from the distance between the attractors and the capsomer plane plus the capsid inradius ( the radius of a sphere inscribed in a dodecahedron ) , which gives . as discussed below ,this is smaller than any enveloped virus , and thus our results are qualitative in nature . 
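to make the subunit geometry and the attractor interaction concrete , here is a minimal sketch ( not the production code ) that places the five vertex attractors of a pentagonal subunit and sums a morse pair interaction between like attractors on two subunits ; the well depth , inverse range and equilibrium distance below are placeholders rather than the parameters of the model . for reference , the `` 116 degrees '' quoted above is the dodecahedral dihedral angle , arccos(-1/sqrt(5)) , approximately 116.57 degrees .

```python
import numpy as np

def pentagon_vertices(radius=1.0):
    """Coordinates of the five vertex attractors of a flat pentagonal subunit
    (subunit lies in its own x-y plane; radius is the pentagon circumradius)."""
    angles = 2.0 * np.pi * np.arange(5) / 5.0
    return np.column_stack((radius * np.cos(angles),
                            radius * np.sin(angles),
                            np.zeros(5)))

def morse(r, depth=1.0, alpha=4.0, r0=0.5):
    """Morse potential used between like attractor sites (vertex-vertex or edge-edge).
    depth, alpha (inverse range) and r0 (equilibrium distance) are illustrative."""
    x = np.exp(-alpha * (r - r0))
    return depth * (x * x - 2.0 * x)

def attractor_energy(verts_a, verts_b, **morse_kw):
    """Sum the Morse interaction over all vertex-vertex pairs of two subunits."""
    d = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=-1)
    return morse(d, **morse_kw).sum()

if __name__ == "__main__":
    a = pentagon_vertices()
    b = pentagon_vertices() + np.array([1.8, 0.0, 0.0])  # a neighboring subunit
    print(attractor_energy(a, b))
```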
in this work we are motivated by viruses such as hiv , where expression of the capsid protein ( gag ) alone is sufficient for the formation of budded particles . therefore we consider a model which does not include viral transmembrane proteins ( spike proteins ) . we also do not consider how some viruses use cellular machinery to drive scission , as this process is virus - specific and depends on detailed properties of cellular proteins . for those viruses our model may elucidate the mechanisms leading up to the point of scission . _ simulations ._ simulations were performed on gpus with a modified version of hoomd 0.10.1 . we modified the andersen barostat implementation to simulate the membrane at constant temperature and constant tension and to couple the barostat to rigid - body dynamics . the membrane was coupled to the thermostat and barostat with characteristic times and respectively , with the characteristic diffusion time for a lipid bead ( defined below ) . the imposed tension was set to zero . each capsomer was simulated as a rigid body using the brownian dynamics algorithm , which uses the ( non - overdamped ) langevin equation to evolve positions and rigid body orientations in time . to approximate the rotational dynamics of globular proteins , we modified the rigid - body algorithm in hoomd so that forces and torques arising from drag and random buffeting were applied separately and isotropically . finally , the code was modified to update rigid - body positions according to changes in the box size generated by the barostat at each time step . matthews and likos showed that hydrodynamic interactions ( hi ) between lipid particles can increase the rate of membrane deformation . however , given that the mechanisms of assembly and budding appeared to be similar in simulations which did not include hi , that the timescales for protein diffusion and association are only qualitative in a coarse - grained model , and given the large computational cost required to include hi in our more detailed model , we neglect hi in our simulations . _ units ._ we set the units of energy , length , and time in our simulations equal to the characteristic energy , size , and diffusion time for a lipid bead : , and respectively . the remaining parameters can be assigned physical values by setting the system to room temperature , , and noting that the typical width of a lipid bilayer is around 5 nm , and the mass of a typical phospholipid is about 660 g / mol . the units of our system can then be assigned as follows : nm , g / mol , , and ps . for each set of parameters , the results from four independent simulations were averaged to estimate the mean behavior of the system . _ timescales ._ the diffusion coefficient of capsomers in solution is , while for capsomers adsorbed on the membrane it is . to simulate an infinite membrane , periodic boundary conditions were employed for the lateral dimensions and a wall was placed at the bottom of the box . thus , the capsomers remained below the membrane unless they budded through it . to maintain a constant and equal ideal gas pressure above and below the membrane ( despite the imbalance of capsomer concentrations ) , ` phantom ' particles were added to the system . these particles experienced excluded - volume interactions with the lipid head beads , and no other interactions .
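as a worked example of the reduced - units mapping described above , the following sketch converts the simulation units to approximate physical values , taking the energy unit as k_b t at room temperature , the bead mass as one third of a 660 g / mol lipid , and an assumed bead size of about 1 nm ; these inputs are illustrative , so the outputs are order - of - magnitude estimates rather than the exact conversions quoted in the text .

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J / K
NA = 6.02214076e23         # Avogadro constant, 1 / mol

def reduced_units(T=300.0, sigma_nm=1.0, lipid_mass_gmol=660.0, beads_per_lipid=3):
    """Map reduced (epsilon, sigma, m) units to SI, with tau = sigma*sqrt(m/epsilon)."""
    eps = KB * T                                          # energy unit, J
    sigma = sigma_nm * 1e-9                               # length unit, m
    m = (lipid_mass_gmol / beads_per_lipid) / NA * 1e-3   # bead mass, kg
    tau = sigma * math.sqrt(m / eps)                      # intrinsic time unit, s
    return eps, sigma, m, tau

if __name__ == "__main__":
    eps, sigma, m, tau = reduced_units()
    print(f"epsilon = {eps:.3e} J, sigma = {sigma:.2e} m, "
          f"m = {m:.2e} kg, tau = {tau:.2e} s (~{tau*1e12:.1f} ps)")
```

with these assumed inputs the intrinsic time unit comes out in the picosecond range , consistent with the order of magnitude quoted above .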
for most simulations of inhomogeneous membranes the membrane contained lipids , including those belonging to the domain . an initial bilayer configuration was equilibrated and then placed with its normal along the -axis in a cubic box of side - length and . for large domains ( ) the membrane contained lipids and the initial box size was . for most simulations of homogeneous membranes the bilayer contained lipids and the initial box size was ; additional simulations on larger membranes were performed to rule out finite size effects . the capsomers were introduced into the box in two different ways , to understand how the rate of subunit translation and/or targeting to the membrane affects assembly . the first set of simulations considered budding via quasi - equilibrium states , meaning that capsid proteins adsorb onto the membrane slowly in comparison to assembly and membrane deformation timescales . this scenario corresponds to the limit of low subunit concentration and a rate of subunit protein translation or targeting of subunits to the membrane which is slow in comparison to assembly . specifically , each new capsomer was injected around 50 below the membrane midplane once all previously injected subunits were part of the same cluster . for other simulations , capsomers were injected one - by - one with an interval until reaching a predefined maximum number of subunits . in the limit of , all capsomers were placed randomly at distances between 30 and 50 below the membrane at the beginning of the simulation . for all simulations , the initial configuration had three free capsomers placed at 30 below the membrane .
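the three subunit - introduction protocols can be summarized by the scheduling sketch below ; ` largest_cluster_size ` , ` n_capsomers ` , ` inject_capsomer ` and ` advance ` are hypothetical helpers standing in for the actual simulation calls , and only the injection depth ( about 50 , in reduced length units ) and the 12 - subunit capsid size are taken from the description in the text .

```python
def run_injection_protocol(sim, tau_inject=None, n_max=12, depth=50.0):
    """Schedule capsomer injections below the membrane.

    tau_inject=None  -> quasi-equilibrium protocol: inject the next capsomer only
                        once every capsomer already present belongs to one cluster.
    tau_inject=0     -> place all capsomers at the start of the run.
    tau_inject=t > 0 -> inject one capsomer every t time units.

    `sim` is a stand-in for the simulation state; `sim.largest_cluster_size()`,
    `sim.n_capsomers()`, `sim.inject_capsomer(depth)` and `sim.advance(dt)` are
    hypothetical helpers, not part of any real package.
    """
    if tau_inject == 0:
        while sim.n_capsomers() < n_max:
            sim.inject_capsomer(depth)
        return

    while sim.n_capsomers() < n_max:
        if tau_inject is None:
            # quasi-equilibrium: wait until all current subunits form one cluster
            while sim.largest_cluster_size() < sim.n_capsomers():
                sim.advance(1.0)
        else:
            sim.advance(tau_inject)
        sim.inject_capsomer(depth)
```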
hiv ) . the results presented here correspond to long but finite simulation times , at which point assembly outcomes appeared roughly independent of increasing simulation time . while these results need not necessarily correspond to equilibrium configurations , note that capsid assembly must proceed within finite timescales in _ in vivo _ or _ in vitro _ settings as well . [ figure caption ( fig . [ fig : noraft ] ) : ( a ) , ( b ) assembled but partially wrapped capsids for ( a ) and ( b ) . ( c ) assembly stalls at a half capsid for . ( d ) a deformed , open structure forms for . ] given that capsid proteins may be targeted to the membrane rather than arriving by diffusion , we have considered several modes of introducing subunits into our simulated system , as described in methods . we began by simulating assembly on a homogeneous membrane ( a single species of lipid ) ( fig . [ fig : model]c ) via quasi - equilibrium states , meaning that free subunits were injected into the system far from the membrane one - by - one , each after all previously injected subunits were assembled ( see methods ) . this scenario corresponds to the limit of low subunit concentration and a rate of subunit protein translation or targeting of subunits to the membrane which is slow in comparison to assembly . we found that assembly of membrane - adsorbed subunits required large subunit - subunit interactions ( as compared to those required for assembly in bulk solution ) , but that such subunits could undergo rapid nucleation on the membrane . however , we found no sets of parameter values for which our model undergoes complete assembly and budding on a homogeneous membrane . in most simulations , assembly slows dramatically after formation of a half - capsid ( six subunits ) . the nature of subsequent assembly depends on the adhesion strength . for low adhesion strengths ( ) , assembly beyond a half - capsid occurs when particles detach from the membrane , sometimes leading to nearly completely assembled but partially wrapped capsids ( fig . [ fig : noraft ] a , b ) . at intermediate adhesion strengths ( ) particles do not readily dissociate from the membrane and assembly typically stalls at a half - capsid . higher adhesion strengths ( ) yield deformed , open structures which cannot drive complete budding ( fig . [ fig : noraft]d ) . these results reveal that adsorption to a membrane has mixed effects on assembly . through dimensional reduction , membrane adsorption reduces the search space and thus can promote subunit - subunit collisions . furthermore , as shown in matthews et al , adsorption to the membrane can lead to high local subunit concentrations and thus reduce nucleation barriers . similar effects occur during assembly on a polymer . however , assembly on the membrane also introduces new barriers to assembly . first , formation of a completely enveloped capsid incurs a membrane bending free energy cost of , independent of capsid size . this free energy penalty must be compensated by subunit - subunit and subunit - membrane interactions . in our model the subunit - membrane interactions do not promote membrane curvature , and thus large subunit - subunit interactions were required for assembly on the membrane .
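to put numbers on the bending penalty mentioned above , the following standard helfrich estimates ( textbook results , not equations taken from this paper ) give the cost of complete envelopment , the adhesion density needed to pay for it , and the line tension needed to bud a bare domain ; here kappa is the bending rigidity , R the wrapped - capsid radius , w the adhesion energy per unit area , lambda the domain line tension and R_d the domain radius .

```latex
% standard Helfrich estimates (assumed textbook background, not equations from this paper)
\begin{align*}
 E_{\mathrm{bend}} &= \int \frac{\kappa}{2}\,(2H)^{2}\,dA
   \;=\; \frac{\kappa}{2}\left(\frac{2}{R}\right)^{2} 4\pi R^{2}
   \;=\; 8\pi\kappa \qquad\text{(full envelopment, independent of } R\text{)},\\
 w &\;\gtrsim\; \frac{2\kappa}{R^{2}}
   \qquad\text{(adhesion density needed for complete wrapping)},\\
 2\pi R_{d}\,\lambda &\;\gtrsim\; 8\pi\kappa
   \;\Longrightarrow\; R_{d} \;\gtrsim\; \frac{4\kappa}{\lambda}
   \qquad\text{(line-tension-driven budding of a bare domain)}.
\end{align*}
```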
for these parameters nucleation also occurs in bulk solution if there is no membrane present ( nucleation did not occur in bulk solution with a membrane present for any value of because subunits adsorbed onto the membrane before undergoing nucleation ) .we also considered a model in which the surface of the subunit is curved ( si fig .[ fig : curved_capsomer ] ) , so that subunit - membrane adsorption does promote local curvature .interestingly , this model did not lead to improved assembly as compared to the flat subunits .this surprising result and the frustrated assembly dynamics of half - capsid intermediates reveal additional kinetic and free energetic barriers to assembly , which are geometric in origin . for intermediates below half - size ,the capsid - induced membrane curvature is positive everywhere , and further assembly requires only a small change in the angle of an approaching adsorbed subunit . on the other hand , assembly beyondhalf - size induces a neck characterized by negative curvature ( si fig .[ fig : geom_assembly]a ) ; consequently , subunits approach the assembling partial capsid with orientations that are not conducive to association .addition of such a subunit requires a large membrane deformation , which is energetically unfavorable for physically relevant values of the membrane bending rigidity and thus rare ( si fig .[ fig : geom_assembly]b ) .assembly therefore stalls or , in the case of weak adhesion energy , proceeds by detachment of subunits from the membrane leading to assembled but partially wrapped capsids .the stalled assembly states resemble the partially assembled states theoretically predicted by zhang and nguyen , while the partially wrapped capsids are consistent with the metastable partially wrapped states found for a pre - assembled particle in our previous simulations .a second barrier arises because subunit - membrane interaction energies are reduced in regions where the membrane curvature is large on the length scale of the rigid subunit .this effect introduces a barrier to subunit diffusion across the neck ( see the animation si video [ fig : movie ] ) , thus decreasing the flux of subunits to the assembling capsid . as discussed below, the large magnitude of the membrane - induced barrier to assembly arises in part due to the small capsid size and relatively large subunits of our model .however , the barrier is intrinsic to assembly of spherical or convex polygonal structure on a deformable two - dimensional manifold and thus will exist for any such model . , and .the membrane wraps the growing capsid * ( a - d ) * until the complete , enveloped capsid is connected to the rest of the membrane by a narrow neck * ( e)*. finally , thermal fluctuations lead to fusion of the neck and the encapsulated capsid escapes from the membrane * ( f)*. , scaledwidth=45.0% ] we next simulated assembly in the presence of a phase - separated membrane ( fig .[ fig : model]d ) to understand the effects of a membrane domain on assembly and budding . while there is some evidence that capsid proteins may induce the formation of lipid rafts , the mechanisms of lipid raft formation remain controversial . here , we focus on the effect that the presence of a domain can exert on assembly and budding . we emphasize that we consider lipid - lipid interaction parameters for which the domain is flat and stable in the absence of capsid subunits ( see si fig . [fig : line_tension]b ) ; i.e. 
, the domain line tension is insufficient to drive budding .we first consider budding in the quasi - equilibrium limit . _effect of line tension and adhesion energy ._ figure [ fig : phase_diag1 ] ( left ) shows the predominant final system configurations as a function of and for fixed domain size , which is nearly twice the area required to wrap the capsid .moderate adhesion strengths and small line tensions lead to complete assembly and budding ( fig .[ fig : budding ] ) , meaning that : 12 subunits form a complete capsid , the capsid is completely wrapped by the membrane , and the membrane undergoes scission through spontaneous fusion of the neck to release the membrane - enveloped capsid .because it requires a relatively large thermal fluctuation , scission is characterized by long time scales .after scission , the portion of the domain not enveloping the capsid remains within the membrane .analysis of simulation trajectories identified several mechanisms by which the domain facilitates assembly .firstly , partitioning of adsorbed proteins into the domain generates a high local subunit concentration .secondly , the domain line tension promotes membrane curvature , since this reduces the length of the domain interface . while this effect is not sufficient to drive domain curvature on an empty membrane for the parameters we consider , it facilitates membrane curvature around a partial capsid within the domain .while the first two mechanisms could be anticipated based on existing theoretical knowledge , the simulations also identified a third mechanism that we did not anticipate .namely , domains with sizes of order 2 - 4 times the area of a wrapped capsid promote long , shallow necks around assembly intermediates .while curvature energy favors capsid assembly in the domain interior , the line tension is minimized by a neck which extends to the domain interface .the relatively shallow curvature of such a neck greatly reduces the thermodynamic and kinetic barriers to assembly discussed in the previous section .subunits diffuse readily across a long neck , and subsequent attachment to the assembling capsid incurs relatively small membrane deformation energies .the influence of the neck on subunit diffusion and association is illustrated by animations from assembly trajectories in si video [ fig : movie ] . outside of optimal parameter values ,we observe five classes of alternative end products . _( i ) _ for large values of the line tension , formation of a partial capsid triggers budding of the entire domain before assembly completes . under these parameters ,the interfacial energy provides a driving force for budding of the entire domain , which is balanced by curvature energy in the absence of assembly .however , once the assembly of a partial capsid induces sufficient membrane curvature , the interfacial energy dominates and the domain buds . within this region , the number of subunits found within the budded domain and the threshold value of the surface tension required for entire - domain budding increase with .this trend arises because stronger subunit - membrane adhesion leads to tight wrapping of intermediates and thus larger assemblages are required for the induced curvature to propagate to the domain interface . _( ii ) _ for small and , the capsid assembles but wrapping is incomplete . herethe subunit - membrane adhesion energy is insufficient to compensate for the membrane bending energy cost associated with wrapping . 
_( iii ) _ for larger - than - optimal adhesion strengths , the membrane wraps the assembling capsid tightly with a short neck . as discussed in the previous section , the high negative curvature associated with a short neck inhibits association of the final subunit , leading to stalled , incomplete assembly . _( iv ) _ for large , subunit - membrane adhesion energy dominates over subunit - subunit interactions , leading to mis - assembled structures . finally , _( v ) _ at other domain sizes ( fig . [ fig : phase_diag1 ] right ) we observe configurations in which the capsid is completely wrapped , but the neck does not undergo fusion . to illustrate the timescales , interactions , and coupling between assembly and membrane configurations , the total subunit - subunit attractive interaction energy and the magnitude of membrane deformation are plotted as a function of time for a trajectory leading to each type of outcome in fig . [ fig : trajectories ] . [ figure caption ( fig . [ fig : trajectories ] ) : quantities plotted as a function of time for a trajectory leading to each type of outcome described in the main text . the capsid penetration is measured as the distance between the top of the capsid and the center of mass of the membrane . the color code represents the outcome type and follows the same format as in fig . [ fig : phase_diag1 ] : successful assembly ( green ) , budding of a partial capsid ( yellow ) , complete assembly but incomplete wrapping ( orange ) , stalled assembly with wrapping ( red ) and malformed assembly ( violet ) . ] _ effect of domain size ._ the dependence of assembly and budding on the domain radius for constant line tension is shown in figure [ fig : phase_diag1 ] ( right ) . there is an optimal domain size with about twice the area of a wrapped capsid ( ) that leads to robust assembly and budding over a broad range of adhesion energies . for smaller domains , low values of adhesion lead to budding of the entire domain before assembly completes . this result was unexpected : in the absence of protein assembly , line tension triggers budding _ above _ a threshold domain size ; smaller domains are stable because bending energy dominates over interfacial energy . however , we find here that partial capsid intermediates stabilize membrane deformation over an area proportional to their size , and thus drive budding within domains _ below _ a threshold size . on the other hand , for larger than optimal domains the assembling capsid only deforms a fraction of the domain , and the domain interface does not promote a long neck or curvature around the capsid . the behavior of such a domain is therefore comparable to that in a homogeneous membrane . [ figure caption ( fig . [ fig : phase_diagdyn ] ) : outcomes as a function of the subunit injection timescale and the adhesion strength are shown for a domain with and . the most frequent outcome is shown for every set of parameters . symbols are defined as in figure [ fig : phase_diag1 ] except for symbols , which denote budding of the whole domain with a malformed capsid inside . alternative outcomes observed at some parameter sets are documented in si fig . [ fig : phase_diagdyn_asterisks ] . ] _ effect of subunit adsorption timescale ._ in the quasi - equilibrium simulations discussed so far , the assembly outcomes were determined by the relative timescales of membrane deformation and partial capsid annealing .
to determine the effect of the subunit adsorption timescale, we characterized the system behavior for subunit injection timescales ( see methods ) between the quasi - equilibrium limit and 0 , where all subunits were introduced at the inception of the simulation ( fig .[ fig : phase_diagdyn ] ) .we set , which led to relatively robust budding in the quasi - equilibrium limit .the predominant end products are shown as a function of the adhesion strength and the subunit injection timescale in figure [ fig : phase_diagdyn ] .we see that the qualitative behavior is independent of the injection timescale ; for all injection rates there is range of intermediate adhesion strengths around for which complete assembly and budding is observed .however , as the injection timescale decreases , both the lower and upper bounds of this optimal range shift to weaker adhesion energies . at weak adhesion energiesthe increased frequency of subunit binding promotes complete assembly and budding by reducing the overall assembly timescale below the timescale for budding of the entire domain .stronger - than - optimal adhesion energies tend to result in malformed assemblages ( si fig .[ fig : kinetic_traps]b ) at the lower injection timescales .this result can be understood from previous studies of assembly into empty capsids or around polymers higher adhesion energies lead to an exponential increase in the timescale for annealing of partial capsid configurations ; kinetic traps occur when annealing timescales exceed the subunit binding timescale .the ultimate fate of these large aggregates depends on the adhesion energy . for smaller - than - optimal adhesion energies ,assemblages are loosely wrapped and the entire domain undergoes budding once the assemblage reaches a threshold size ( e.g. si fig .[ fig : kinetic_traps]b ) . for larger ,malformed aggregates are tightly wrapped by the membrane and remain attached by a neck ( e.g. si fig .[ fig : kinetic_traps]a ) . the shortest injection timescales and largest adhesion energies we investigated lead to large flat aggregates that do not bend the membrane ( fig .[ fig : kinetic_traps]c ) , or partial capsids emerging from a flat aggregate ( si fig .[ fig : kinetic_traps]d ) .finally , we note that as the subunit injection timescale is decreased , the diversity of outcomes at a given parameter set increases and thus the yield of budded well - formed capsids decreases ( si fig .[ fig : phase_diagdyn_asterisks ] ) ._ effect of subunit copy number ._ we found that the dynamics is qualitatively similar when excess subunits are included in the simulation .for example , we performed simulations on systems with with 19 capsomers , about 60% more than needed for capsid formation . for an injection timescale of ,the behavior is similar to the small results discussed above , except that subunits on the periphery of an assembling capsid typically form flat aggregates that can hinder budding ( si fig .[ fig : budplenty ] ) .for adhesion strengths between 0.3 to 0.4 budding is observed ( si fig .[ fig : budplenty ] ) , whereas larger values of lead to the forms of kinetic traps discussed above .our simulations demonstrate that , while a fluctuating membrane can promote assembly through dimensional reduction , it also introduces barriers to assembly by limiting the diffusion and orientational fluctuations of adsorbed subunits .these barriers , which are not present for assembly in bulk solution , can engender metastable partially assembled or partially budded structures . 
while barrier heights may depend on the specific membrane and protein properties ( see below ) , their existence is generic to the assembly of a curved structure on a deformable surface .we find that assembly from a membrane microdomain can substantially reduce the effect of these barriers , which could partly account for the prevalence of enveloped viruses that preferentially bud from lipid rafts or other membrane microdomains . as a first exploration of the relationship between membrane domain structure and budding, we considered a minimal model for a microdomain , which accounts only for preferential partitioning or targeting of capsid proteins within the domain .our simulations identified two effects by which such a domain can promote assembly and budding : ( i ) generating a high local concentration of adsorbed subunits and ( ii ) decreasing membrane - associated barriers to assembly by lengthening the neck around the budding capsid .while the first effect could be anticipated from standard reaction diffusion analysis , the second arises from the complex interplay between domain line tension and the geometry of a bud .importantly , the predicted effects are sensitive to the domain size ( fig . [fig : phase_diag1 ] ) , with an optimal domain size on the order of 2 - 3 times the area of a wrapped capsid .smaller domains lead to budding before completion of assembly , whereas facilitation of budding becomes ineffective when the domain radius becomes large in comparison to the capsid size .importantly , these predictions can be directly tested by _ in vitro _experiments in which capsid proteins assemble and bud from artificial phospholipid vesicles with domains of varying sizes .finally , we consider the limitations of the model studied here .although extending the model to relax these limitations is beyond the scope of the present work , doing so in future investigations could further elucidate the factors that control assembly and budding .the effective diameter of our enveloped capsid is about 28 nm , while the smallest enveloped viruses found in nature have diameters of 40 - 50 nm ( e.g. ) . although the relationship between particle size and budding has been explored in detail for preassembled nucleocapsids or nanoparticles ( e.g. ) , our simulations here have identified new factors that control simultaneous assembly and budding . during assembly of a larger capsid , each subunit would individually comprise a smaller fraction of the total capsid area and thus would incur a smaller increment of membrane deformation energy when associating with the capsid .similarly , intra - subunit degrees of freedom could allow subunit distortions that would facilitate diffusion across the neck .however , note that such distortions would be free energetically unfavorable and thus would still impose a free energy barrier .we also note that the potential used for the subunit - membrane interaction in this work does not represent local distortions of the lipid hydrophobic tails resulting from insertion of a hydrophobic group .these distortions would most likely promote local negative curvature and thus might inhibit formation of membrane curvature ; however , they could induce membrane - mediated attractions between subunits .given the qualitative nature of subunit - subunit interactions in our model , we do not expect these effects to qualitatively change the results . 
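a rough back - of - envelope estimate ( ours , assuming a typical bilayer rigidity of kappa ~ 20 k_b t ) makes the capsid - size argument above concrete : sharing the 8 pi kappa envelopment cost among n subunits gives

```latex
% back-of-envelope estimate; kappa ~ 20 k_B T is an assumed typical value
\begin{align*}
 \Delta E_{\mathrm{bend}}^{\,\text{per subunit}} \;\sim\; \frac{8\pi\kappa}{N}
 \;\approx\; \frac{8\pi\,(20\,k_{B}T)}{12} \;\approx\; 42\,k_{B}T
 \qquad\text{for the 12-subunit model capsid,}
\end{align*}
```

whereas a capsid built from on the order of a hundred subunits would pay only a few k_b t per association step , which is why larger capsids should feel a weaker membrane - induced barrier .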
while our model demonstrates three mechanisms by which a domain can promote membrane deformation , the effect of lipid and protein compositions within microdomains on membrane bending rigidity and spontaneous curvature could have additional effects .similarly , for some viruses important roles are played by recruitment of additional viral proteins , other cellular factors that create or support membrane curvature , and cytoskeletal machinery that actively drives budding ( e.g. ) .while these results can be systematically incorporated into the model , our current simulations provide an essential starting point to understand how microdomains facilitate budding and , through comparison with experiments , to identify the critical steps in budding .this work was supported by award number r01gm108021 from the national institute of general medical sciences , nsf - mrsec-0820492 , modelico grant ( s2009/esp-1691 ) from comunidad autnoma de madrid , and fis2010 - 22047-c05 - 01 grant from ministerio de ciencia e innovacin de espaa .computational resources were provided by the national science foundation through xsede computing resources ( longhorn and keeneland ) and the brandeis hpcc .supporting material is available for this article .references appear in the supporting material . 83 [ 1]`#1 ` baumgart , t. , b. r. capraro , c. zhu , and s. l. das , 2011 .thermodynamics and mechanics of membrane curvature generation and sensing by proteins and lipids . __ 62:483506 .krauss , m. , and v. haucke , 2011 .shaping membranes for endocytosis ._ in _ s. g. amara , e. bamberg , b. k. fleischmann , t. gudermann , r. jahn , w. j. lederer , r. lill , b. nilius , and s. offermanns , editors , reviews of physiology , biochemistry and pharmacology , springer berlin heidelberg , volume 161 of _ reviews of physiology , biochemistry and pharmacology _ , 4566 .sundquist , w. i. , and h .-krusslich , 2012 ._ cold spring harbor perspectives in medicine _ 2:a006924 .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3385941&tool=pmcentrez&rendertype=abstract .hurley , j. h. , e. boura , l .- a .carlson , and b. rycki , 2010 . _ cell _ 143:875887 . http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3102176&tool=pmcentrez&rendertype=abstract .welsch , s. , b. mller , and h .-krusslich , 2007 ._ febs lett .. http://www.febsletters.org/article/s0014-5793(07)00314-6/abstract .solon , j. , o. gareil , p. bassereau , and y. gaudin , 2005 . _ the journal of general virology _ 86:33573363 .http://vir.sgmjournals.org/content/86/12/3357.full .vennema , h. , g. j. godeke , j. w. rossen , w. f. voorhout , m. c. horzinek , d. j. opstelten , and p. j. rottier , 1996 . _ the embo journal _ 15:20208 . http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=450121&tool=pmcentrez&rendertype=abstract .garoff , h. , r. hewson , and d .- j .e. opstelten , 1998 . ._ microbiol ._ 62:11711190 .waheed , a. a. , and e. o. freed , 2010 ._ viruses _ 2:11461180 .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2927015&tool=pmcentrez&rendertype=abstract .rossman , j. s. , and r. a. lamb , 2011 ._ virology _ 411:229236 .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3086653&tool=pmcentrez&rendertype=abstract .lundstrom , k. , 2009 . ._ viruses _ 1:1325 . http://www.mdpi.com/1999-4915/1/1/13 .cheng , f. , i. b. tsvetkova , y .- l .khuong , a. w. moore , r. j. arnold , n. l. goicochea , b. dragnea , and s. mukhopadhyay , 2013 ._ 10:518 .http://dx.doi.org/10.1021/mp3002667 .rowan , k. , 2010 ._ j. 
natl .cancer inst ._ 102:5905 .http://www.ncbi.nlm.nih.gov/pubmed/20421567 .garoff , h. , m. sjberg , and r. h. cheng , 2004 ._ virus res . _ 106:103116 .http://dx.doi.org/10.1016/j.virusres.2004.08.008 .ruiz - herrero , t. , e. velasco , and m. f. hagan , 2012 ._ j. phys .b _ 116:9595603 . http://dx.doi.org/10.1021/jp301601g .chaudhuri , a. , g. battaglia , and r. golestanian , 2011 ._ 8:046002 .http://stacks.iop.org/1478-3975/8/i=4/a=046002 .deserno , m. , and w. m. gelbart , 2002 . ._ j. phys .b _ 106:55435552 . http://dx.doi.org/10.1021/jp0138476 .fonari , m. , a. igli , d. m. kroll , and s. may , 2009 . ._ j. chem ._ 131:105103 ./pmc / articles / pmc2766406/?report = abstract .ginzburg , v. v. , and s. balijepalli , 2007 ._ nano lett ._ 7:37163722 .http://dx.doi.org/10.1021/nl072053l .jiang , w. , b. y. s. kim , j. t. rutka , and w. c. w. chan , 2008 ._ nature nanotechnology _ 3:14550 . http://dx.doi.org/10.1038/nnano.2008.30 .li , x. , and d. xing , 2010 . ._ 97:153704 .http://link.aip.org/link/?applab/97/153704/1 .li , y. , and n. gu , 2010 ._ j. phys .114:274954 . http://dx.doi.org/10.1021/jp904550b .smith , k. a. , d. jasnow , and a. c. balazs , 2007 . _ j. chem ._ 127:84703 . http://www.ncbi.nlm.nih.gov/pubmed/17764280 .tzlil , s. , m. deserno , w. m. gelbart , and a. ben - shaul , 2004 ._ biophys .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1304057&tool=pmcentrez&rendertype=abstract .vcha , r. , f. j. martinez - veracoechea , and d. frenkel , 2011 ._ nano lett ._ 11:53915 .http://dx.doi.org/10.1021/nl2030213 .yang , k. , and y .- q .ma , 2011 . ._ aust . j. chem . _ 64:894 .http://www.publish.csiro.au/view/journals/dsp_journal_fulltext.cfm?nid=51&f=ch11053 .reynwar , b. j. , g. illya , v. a. harmandaris , m. m. mller , k. kremer , and m. deserno , 2007 ._ nature _ 447:461464 . http://dx.doi.org/10.1038/nature05840 .zhang , r. , and t. nguyen , 2008 . .http://pre.aps.org/abstract/pre/v78/i5/e051903 .matthews , r. , and c. likos , 2012 . ._ 109:178302. http://link.aps.org/doi/10.1103/physrevlett.109.178302 .matthews , r. , and c. n. likos , 2013 .structures and pathways for clathrin self - assembly in the bulk and on membranes ._ soft matter _ 9:57945806. matthews , r. , and c. n. likos , 2013 .dynamics of self - assembly of model viral capsids in the presence of a fluctuating membrane ._ the journal of physical chemistry b _ .humphrey , w. , a. dalke , and k. schulten , 1996 ._ 14:338 , 278 . http://www.ncbi.nlm.nih.gov/pubmed/8744570 .freddolino , p. l. , a. s. arkhipov , s. b. larson , a. mcpherson , and k. schulten , 2006 ._ structure ( london , england : 1993 ) _ 14:43749 . http://www.cell.com/structure/fulltext/s0969-2126(06)00060-8 .schwartz , r. , p. w. shor , p. e. prevelige , and b. berger , 1998 ._ biophys .75:262636 . http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1299938&tool=pmcentrez&rendertype=abstract .hagan , m. f. , and d. chandler , 2006 ._ biophys . j. _ 91:4254 .http://dx.doi.org/10.1529/biophysj.105.076851 .hicks , s. , and c. henley , 2006 . .http://pre.aps.org/abstract/pre/v74/i3/e031912 .nguyen , h. d. , v. s. reddy , and c. l. brooks , 2007 ._ nano lett ._ 7:338344 .http://dx.doi.org/10.1021/nl062449h .wilber , a. w. , j. p. k. doye , a. a. louis , e. g. noya , m. a. miller , and p. wong , 2007 ._ j. chem ._ 127:085106 .http://link.aip.org/link/?jcpsa6/127/085106/1 .nguyen , h. d. , and c. l. 
brooks , 2008 ._ nano lett ._ 8:457481 .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2772182&tool=pmcentrez&rendertype=abstract .nguyen , h. d. , v. s. reddy , and c. l. brooks , 2009 ._ 131:260614 .http://dx.doi.org/10.1021/ja807730x .johnston , i. g. , a. a. louis , and j. p. k. doye , 2010 .modelling the self - assembly of virus capsids ._ j. phys . :condens . matter _ 22 .wilber , a. w. , j. p. k. doye , a. a. louis , and a. c. f. lewis , 2009 . ._ the journal of chemical physics _ 131:. http://scitation.aip.org/content/aip/journal/jcp/131/17/10.1063/1.3243581 .wilber , a. w. , j. p. k. doye , and a. a. louis , 2009 ._ j. chem ._ 131:175101 .http://link.aip.org/link/?jcpsa6/131/175101/1 .rapaport , d. , j. johnson , and j. skolnick , 1999 . ._ 121 - 122:231235 .http://dx.doi.org/10.1016/s0010-4655(99)00319-7 .rapaport , d. , 2004 . .http://pre.aps.org/abstract/pre/v70/i5/e051905 .rapaport , d. , 2008 . .http://prl.aps.org/abstract/prl/v101/i18/e186101 .hagan , m. f. , o. m. elrad , and r. l. jack , 2011 ._ j. chem ._ 135:104115 .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3292593&tool=pmcentrez&rendertype=abstract .ayton , g. s. , and g. a. voth , 2010 ._ biophys . j. _http://www.cell.com/biophysj/fulltext/s0006-3495(10)00987-2 .chen , b. , and r. tycko , 2011 .simulated self - assembly of the hiv-1 capsid : protein shape and native contacts are sufficient for two - dimensional lattice formation ._ biophys j _ 100:30353044 .perlmutter , j. d. , c. qiao , and m. f. hagan , 2013 .viral genome structures are optimal for capsid assembly ._ elife _ 2:e00632 .elrad , o. m. , and m. f. hagan , 2010 ._ 7:45003 .http://stacks.iop.org/1478-3975/7/i=4/a=045003 .mahalik , j. p. , and m. muthukumar , 2012 ._ j. chem ._ 136:135101 .http://www.ncbi.nlm.nih.gov/pubmed/22482588 .zhang , r. , and p. linse , 2013 . ._ journal of chemical physics _ 138 .cooke , i. r. , and m. deserno , 2005 . ._ j. chem ._ 123:224710 .wales , d. j. , 2005 .the energy landscape as a unifying theme in molecular science .a _ 363:357375 .fejer , s. n. , t. r. james , j. hernandez - rojas , and d. j. wales , 2009 .energy landscapes for shells assembled from pentagonal and hexagonal pyramids .chem . chem ._ 11:20982104 .hamard - peron , e. , and d. muriaux , 2011 . _ retrovirology _ 8:15 . http://www.retrovirology.com/content/8/1/15 .johnson , m. c. , h. m. scobie , y. m. ma , and v. m. vogt , 2002 ._ j. virol ._ 76:1117711185 .http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=retrieve&db=pubmed&dopt=citation&list_uids=12388677 .baumgrtel , v. , s. ivanchenko , a. dupont , m. sergeev , p. w. wiseman , h .-krusslich , c. bruchle , b. mller , and d. c. lamb , 2011 .cell biol ._ 13:469474 .http://www.ncbi.nlm.nih.gov/pubmed/21394086 .anderson , j. a. , c. d. lorenz , and a. travesset , 2008 . ._ j. comput ._ 227:53425359 . http://dx.doi.org/10.1016/j.jcp.2008.01.047 .nguyen , t. d. , c. l. phillips , j. a. anderson , and s. c. glotzer , 2011 . ._ 182:23072313 . http://linkinghub.elsevier.com/retrieve/pii/s0010465511002153 .andersen , h. c. , 1980 . ._ j. 
chem ._ 72:2384 .http://link.aip.org/link/?jcpsa6/72/2384/1 .to ensure that this algorithm maintain constant tension in the presence of curved membranes during budding , we measured local areal densities .we found that , away from subunits , the mean density and magnitude of fluctuations were unchanged throughout budding , indicating that the imposed tension remained at zero .we did measure small variations in areal density in the vicinity of budding particles .these variations could be explained by subunit - lipid interactions and curvature , but might also indicate local nonzero tensions .however , variations in local forces should be expected in the vicinity of adsorbed particles in physical membranes as well .lingwood , d. , and k. simons , 2010 .lipid rafts as a membrane - organizing principle . _ science _ 327:4650 .kerviel , a. , a. thomas , l. chaloin , c. favard , and d. muriaux , 2013 .virus assembly and plasma membrane domains : which came first ?_ virus research _ 171:332340 .parton , d. l. , a. tek , m. baaden , and m. s. p. sansom , 2013 .formation of raft - like assemblies within clusters of influenza hemagglutinin observed by md simulations ._ plos computational biology _9:e1003034e1003034 .ivanchenko , s. , w. j. godinez , m. lampe , h .-krusslich , r. elis , c. bruchle , b. mller , d. c. lamb , and c. bra , 2009 . ._ plos pathog .hagan , m. f. , 2013 . .arxiv:1301.1657 .balasubramaniam , m. , and e. o. freed , 2011 ._ physiology _ 26:23651 . http://physiologyonline.physiology.org/content/26/4/236.full .kivenson , a. , and m. f. hagan , 2010 ._ biophys . j. _http://dx.doi.org/10.1016/j.bpj.2010.04.035 .phillips , r. b. , j. kondev , j. theriot , and h. garcia , 2013 .physical biology of the cell .garland science , new york , 2 edition .lipowsky , r. , 1993 . ._ biophys . j. _ 64:11331138 .http://dx.doi.org/10.1016/s0006-3495(93)81479-6 .grant , j. , r. l. jack , and s. whitelam , 2011 ._ j. chem ._ 135:214505 .http://link.aip.org/link/?jcpsa6/135/214505/1 .rapaport , d. c. , 2010 ._ 7:045001 .http://stacks.iop.org/1478-3975/7/i=4/a=045001 .jones , c. t. , l. ma , j. w. burgner , t. d. groesch , c. b. post , and r. j. kuhn , 2003 . ._ j. virol ._ 77:71437149 .http://jvi.asm.org/content/77/12/7143.short .yue , t. , and x. zhang , 2011 . ._ soft matter _http://pubs.rsc.org/en/content/articlehtml/2011/sm/c1sm05398a .mcmahon , h. t. , and j. l. gallop , 2005 . ._ nature _ 438:590596 .doherty , g. j. , and h. t. mcmahon , 2009 . ._ annu . rev ._ 78:857902 .taylor , m. p. , o. o. koyuncu , and l. w. enquist , 2011 ._ 9:42739 .http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3229036&tool=pmcentrez&rendertype=abstract .gladnikoff , m. , e. shimoni , n. s. gov , and i. rousso , 2009 ._ biophys . j. _ 97:24192428 . http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2770610&tool=pmcentrez&rendertype=abstract .cooke , i. , k. kremer , and m. deserno , 2005 . .http://pre.aps.org/abstract/pre/v72/i1/e011506 .weeks , j. d. , d. chandler , and h. c. andersen , 1971 . ._ j. chem ._ 54:5237 .http://link.aip.org/link/?jcpsa6/54/5237/1 .grest , g. , and k. kremer , 1986 . 
.we model the amphiphilic lipids comprising the membrane with a coarse grained implicit solvent model from cooke et al , in which each amphiphile is represented by one head bead and two tail beads that interact via wca potentials , equation ( [ eq : wca ] ) with and is chosen to ensure an effective cylindrical lipid shape : and , where will turn out to be the typical distance between beads within a model lipid molecule .the beads belonging to a given lipid are connected through fene bonds ( eq . ( [ eq : bond ] ) ) and the linearity of the molecule is achieved via a harmonic spring with rest length between the first and the third bead , eq .( [ eq : bend ] ) \label{eq : bond}\ ] ] since this is an implicit solvent model , hydrophobicity is represented by an attractive interaction , equation ( [ eq : hydrophobicity ] ) , between all tail beads . the molecules belonging to the domain are labeled _ d _ , while those forming the rest of the membrane are referred as _m_. the interaction between molecules of the same type is the same , but the strength of the effective hydrophobic interaction between molecules of different type is lower : where the interaction between the molecules of the same type is given by , and the cross term , , is a parameter that controls the strength of the line tension between domains . varying from 0 to tunes the line tension , from a large value to 0 .the energy of the domain border is proportional to the line tension and the domain perimeter equation ( [ eq : domain_energy ] ) where is the domain radius and the line tension .we estimated the energy stored in the edge from the difference between the stress components normal and tangent to the interface , and obtained a linear relation between the line tension and the attractive interaction strength between lipid molecules of different type ( fig .[ fig : line_tension]a , eq .( [ eq : line_tension ] ) ) : with for .the membrane phase behavior as a function of the line tension is shown in ( fig .[ fig : line_tension]b ) . for small values of membrane composition is homogeneous , for intermediate values a phase - separated membrane is stable , while for high values of interfacial tension drives budding of the entire domain .this model allows the formation of bilayers with physical properties such as fluidity , area per molecule and bending rigidity that are easily tuned via .moreover , diffusivity within the membrane , density , and bending rigidity are in good agreement with values of these parameters measured for biological membranes . our model for capsid assemblyis based on the model for capsids developed by wales , but has been extended to allow for physically realistic interactions with the membrane .our capsid subunit is a rigid body with a pentagonal base and radius of formed by 15 attractive and 10 repulsive interaction sites . 
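before turning to the capsid subunit model in more detail, the membrane interactions described above can be collected in one short sketch. the functional forms below (wca core, fene bond, harmonic straightening spring, broadened cosine-squared tail attraction) follow the published cooke-deserno model cited in the text; the numerical parameters (bond stiffness, attraction range, the value of the cross term between unlike lipids) are placeholders of this sketch and not necessarily the values used in the simulations reported here.

```python
import numpy as np

# placeholder parameters (not necessarily the values used in this study)
eps = 1.0                 # unit of energy
sigma = 1.0               # head bead size; tail beads use b = 0.95*sigma for a cylindrical lipid shape
b_tail = 0.95 * sigma
w_c = 1.6 * sigma         # range of the tail-tail attraction (assumed)
r_inf = 1.5 * sigma       # maximum fene bond extension (assumed)
k_fene = 30.0 * eps / sigma**2

def v_wca(r, sig=sigma, e=eps):
    """purely repulsive wca core: lennard-jones shifted up by e and cut at 2**(1/6)*sig."""
    rc = 2.0 ** (1.0 / 6.0) * sig
    r = np.asarray(r, dtype=float)
    v = 4.0 * e * ((sig / r) ** 12 - (sig / r) ** 6) + e
    return np.where(r < rc, v, 0.0)

def v_fene(r, k=k_fene, rinf=r_inf):
    """fene bond connecting consecutive beads of a lipid (valid for r < rinf)."""
    return -0.5 * k * rinf**2 * np.log(1.0 - (np.asarray(r, dtype=float) / rinf) ** 2)

def v_bend(r13, k_bend=10.0 * eps / sigma**2, r0=4.0 * sigma):
    """harmonic head-to-second-tail spring that keeps the three-bead lipid straight."""
    return 0.5 * k_bend * (r13 - r0) ** 2

def v_tail_attr(r, e_attr, sig=b_tail, wc=w_c):
    """broadened attraction between hydrophobic tail beads: -e_attr inside the core,
    a cosine-squared decay of width wc beyond it, zero outside."""
    rc = 2.0 ** (1.0 / 6.0) * sig
    r = np.asarray(r, dtype=float)
    decay = -e_attr * np.cos(np.pi * (r - rc) / (2.0 * wc)) ** 2
    return np.where(r < rc, -e_attr, np.where(r < rc + wc, decay, 0.0))

# a single cross term controls the line tension between domain (d) and matrix (m) lipids
eps_dd = eps_mm = 1.0 * eps
eps_dm = 0.6 * eps        # lowering eps_dm towards zero increases the line tension
```

in this reduced description, the only knob that distinguishes the two lipid species is the cross interaction strength, which is why a single parameter suffices to tune the domain line tension.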
while the original model contained 5 attractive and 5 repulsive sites , the new sites that we have added ( figs .[ fig : fullcapsomer ] and [ fig : capsomer_model ] ) are necessary to describe assembly on a fluctuating surface .the effects of their inclusion are shown in the following sections .subunit assembly is mediated through a morse potential between ` attractor ' pseudoatoms located in the pentagon plane , with one located at each subunit vertex and 2 along each edge .attractions occur between like attractors only , meaning that there are vertex - vertex and edge - edge attractions , but no vertex - edge attractor interactions , equation ( [ eq : attractors ] ) where is the interaction strength between vertex sites , is the interaction strength between edge sites , is the distance between sites _m _ and _ n_,with _ m _ running over the attractor sites on the vertices of the first capsomer , and _n _ running over the vertices on the second . is the analogous distance between the edge sites on each of the capsomers . is the equilibrium pair distance and defines the range of the interaction . in comparison to the original model the additional attractive sites provide a stronger driving force for formation of structures with the lowest energy face - face angles and thus provide additional thermodynamic stabilization of the lowest energy dodecahedron capsid structure .this increased stabilization of the curved , icosahedral shape is necessary to compete with membrane bending energy which favors flat aggregates .although we did observe assembly on the membrane with the original model for carefully tuned parameters , the improved model is much more robust , meaning that it undergoes assembly over a much wider range of parameter values ( fig .[ fig : newattractors_effect ] ) .the 10 repulsive interaction sites are separated into 5 ` top ' and 5 ` bottom ' sites , which are arranged symmetrically above and below the pentagon plane respectively , so as to favor a subunit - subunit angle consistent with a dodecahedron ( 116 degrees ) .they are at distance from the capsomer plane , and their projections on that plane lie on each of the pentamer radii , at a distance to the corner .the ratio is the same as in the original model ( fig .[ fig : capsomer_model ] and fig .[ fig : geometry_sigma ] ) .the interaction potential between top and bottom sites on two capsomers is similar to that in the original model but extended to all the sites : where is the distance between the top sites , with and running over the 5 top sites of each of the capsomers , and is the distance between and , with running over the bottom sites of the first capsomer and running over the top and bottom sites of the second one . is , as in the original model , the distance between two adjacent top sites in a complete capsid at its lowest energy configuration , and is obtained from the geometry depicted in fig .[ fig : geometry_sigma ] : where .similarly , was initially set to the distance between the top and bottom sites of two adjacent capsomers in a complete capsid , but then was adjusted to to optimize assembly behavior .we changed the form of the repulsive sites in the original model ( one top and one bottom site ) to 5 sites for the following reasons . 
from exploratory simulations, we found that membrane - subunit interactions significantly constrained relative orientations of nearby adsorbed subunits for physically relevant values of the membrane bending modulus .therefore , association can proceed only through a relatively narrow range of face - face angles ( in comparison to the angles available for association in solution ) . as the partial capsid grows , the accessible range of angles narrows even further ( fig . [ fig : geom_assembly]a ) .increasing the number of repulsive sites and moving them closer to the capsomer plane enables a decrease in the interaction range , which allows a wider range of approach angles ( fig .[ fig : wales_pot ] ) while maintaining the equilibrium angle at the same value as for the original model .moreover , the reduction of the interaction cutoff reduced computation times by nearly a factor of 3 .the attractive subunit - membrane interaction is mediated by six sites , one at each of the five vertices and one at the center of the capsomer .each site sits at a distance from the pentamer plane ( fig .[ fig : fullcapsomer ] ) .these new interaction sites interact only with the tails of the lipid molecules ; in simulations with a domain the sites interact only with domain lipid tails .the attractor - tail interaction is the same as the tail - tail interaction except that there is no repulsive component , as if the attractors were point - particles with no excluded volume : where is the distance between a capsomer adhesion site and the tail bead of a lipid .a layer of 35 beads arranged in the shape of a pentagon is added to the capsomer base to prevent the overlap of viral subunits and membrane lipids ( fig .[ fig : fullcapsomer ] ) . these beadsinteract via an excluded volume potential with all lipid beads : the adhesion free energy per capsomer was estimated from the calculation of the interaction between the matrix protein attractive site and the lipid tail beads lying inside its interaction range .the number of interacting beads depends on the matrix protein penetration into the membrane ( fig.[fig : adhesion_energy_geo]a ) , so the free energy was integrated over the accessible values of the penetration : where is the standard volume , the range of possible penetrations where capsomer adhesion sites experience attractive interactions with the membrane , and the interaction energy for a given penetration : we found that the adhesion free energy is linearly related to : with .note that this estimate overestimates the adsorption free energy , since it does not include entropic penalties suffered by lipid molecules upon subunit adsorption .the parameters for the membrane are chosen from ref . so that the bilayer is in a fluid state .we set the temperature of our simulations to and the lipid - lipid interaction range to , both in equation ( [ eq : hydrophobicity ] ) and equation ( [ eq : adh_hydrophobicity ] ) .the bending rigidity for these values is and the areal density of lipids .the parameters for the virus model were set according to the phase diagrams of the original model and our exploratory simulations of assembly on a membrane .we found that the optimal parameters that allow large assembly yields for a wide range of concentrations for are : , , , , , , , , , and , with the subindices ` m ' and ` v ' referring to the units of the membrane and virus models respectively . 
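the subunit-subunit attraction and the subunit-membrane adhesion described above can be illustrated with the following minimal sketch. the morse form of the attractor-attractor interaction and the cutoff-free tail attraction of the adhesion sites follow the description in the text; all numerical values (well depths, ranges, equilibrium distances) are placeholders rather than the fitted parameters listed above.

```python
import numpy as np

def v_morse(r, depth, r_eq, alpha):
    """morse attraction used between like attractor sites (vertex-vertex or edge-edge);
    depth is the well depth, r_eq the equilibrium pair distance, alpha the inverse range."""
    x = np.exp(-alpha * (np.asarray(r, dtype=float) - r_eq))
    return depth * (x * x - 2.0 * x)

def subunit_pair_attraction(verts1, verts2, edges1, edges2, eps_v, eps_e, r_eq, alpha):
    """sum morse terms over vertex-vertex and edge-edge attractor pairs only;
    there are no vertex-edge cross terms, as described in the text."""
    e = 0.0
    for a in verts1:
        for b in verts2:
            e += v_morse(np.linalg.norm(a - b), eps_v, r_eq, alpha)
    for a in edges1:
        for b in edges2:
            e += v_morse(np.linalg.norm(a - b), eps_e, r_eq, alpha)
    return e

def v_adhesion(r, eps_ad, sigma_t=1.0, w_c=1.6):
    """attraction between a capsomer adhesion site and a lipid tail bead: the same
    broadened attraction as the tail-tail term, but with no repulsive core, i.e. the
    adhesion site behaves as a point particle without excluded volume."""
    rc = 2.0 ** (1.0 / 6.0) * sigma_t
    r = np.asarray(r, dtype=float)
    decay = -eps_ad * np.cos(np.pi * (r - rc) / (2.0 * w_c)) ** 2
    return np.where(r < rc, -eps_ad, np.where(r < rc + w_c, decay, 0.0))

# illustrative use with placeholder values and random stand-in attractor coordinates
rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=(5, 3)), rng.normal(size=(5, 3)) + 2.0
e1, e2 = rng.normal(size=(10, 3)), rng.normal(size=(10, 3)) + 2.0
print(subunit_pair_attraction(v1, v2, e1, e2, eps_v=1.0, eps_e=1.0, r_eq=1.0, alpha=4.0))
print(v_adhesion(np.linspace(0.8, 3.0, 5), eps_ad=1.0))
```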
in order to couple both models, we used the membrane units as the fundamental units and rewrote the parameters for capsid assembly as functions of them. the units of energy, length, and time in our simulations were then respectively , and . the values of the capsid parameters were chosen so that the total energy of assembly exceeds the bending energy of wrapping the capsid. in this way, the unit of energy of the assembly model is set to , the length and time parameters are the same as those of the membrane model, and . since , the energy needed for assembly on the membrane is above the optimal energy for bulk assembly. finally, the thickness of the capsomer is , and the total mass of a capsomer is . the remaining parameters can be assigned physical values by setting the system to room temperature, , and noting that the typical width of a lipid bilayer is around 5 nm and the mass of a typical phospholipid is about 660 g/mol. the units of our system can then be assigned as follows: nm, g/mol, , and ps.

figure caption (fig. [fig:line_tension]): (a) line tension, calculated from the difference between the stress components normal and tangent to the interface; the solid line is a linear fit to the data (eq. ([eq:line_tension])). (b) phase diagram of the domain behavior as a function of the domain radius and the line tension, obtained from molecular dynamics simulations of a membrane with a domain; the possible outcomes indicated are domain dissolution, domain in equilibrium with the membrane, and spontaneous budding of the whole domain.

figure caption (capsomer geometry, figs. [fig:capsomer_model] and [fig:geometry_sigma]): (left) the projections of the new repulsive sites on the capsomer plane lie on the pentamer radii, at fixed distances from the nearest vertex and from the pentamer edge; their distances from the capsomer plane keep the same proportions as in the original model. (right) geometry of capsomer-capsomer binding: for two adjacent pentamers in a complete capsid, the distance between two opposite repulsive sites and the equilibrium angle are indicated.

figure caption (fig. [fig:adhesion_energy_geo]): (a) lipid tails are represented in blue and heads in red; the tail beads that lie inside the matrix-protein interaction volume are shown in solid blue and are confined between two planes in the z-direction, with the extent of the interaction volume set by the potential cutoff. (b) geometry used for the integration of the adhesion energy: the energy contribution of every point inside a disk of given radius and width dz is integrated over the interaction volume (represented in blue).

figure caption (phase diagram): the color code represents the outcome type and follows the same format as fig. [fig:phase_diag1] of the main text: successful assembly (green), budding of a partial capsid (yellow), stalled assembly with wrapping (red), and malformed assembly (violet).

figure caption (fig. [fig:kinetic_traps], trajectories with varying adhesion strength and time between subunit injections): (a) slices of configurations at different times; a dimer associates with a strained geometry to the growing capsid, so the next subunit is prevented from proper association and a malformed capsid arises. (b) two partial capsids nucleate and then coalesce into a malformed assemblage, which then drives budding of the entire domain. (c) high values of the adhesion strength and injection rate lead to formation of a flat aggregate on the membrane. (d) an intermediate adhesion strength and a high injection rate lead to formation of a partial capsid trapped within a flat aggregate. both top and side views are shown for (c) and (d).

figure caption (assembly and budding trajectory): side views of the process are shown, with the indicated times counted from the start of the simulation. (a) when the last subunit is injected, the capsid is already half formed. (b) two partial aggregates are formed, and (c) assemble into a malformed capsid. (d) the capsomers rearrange into an almost finished capsid. (e) the last subunit assembles and (f) the capsid buds.

figure caption (most frequent outcomes): the most frequent outcome is shown for every set of parameters (with symbols as defined in fig. [fig:phase_diag1] of the main text). asterisks indicate that other behaviors are observed in some trajectories, with the asterisk color giving the nature of the alternative outcome: red, incomplete assembly with wrapping; green, complete assembly and wrapping; black, budding of the whole raft with a malformed capsid, as shown in fig. [fig:kinetic_traps]b.
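returning to the unit assignment discussed before the figure captions, the reduced simulation units can be related to physical units along the following lines. only room temperature, the 5 nm bilayer width, and the 660 g/mol lipid mass are taken from the text; the reduced temperature of the membrane model and the bilayer-width-to-length-unit ratio assumed below are illustrative guesses, so the resulting numbers should not be read as the values used in this work.

```python
import math

# anchors taken from the text
T = 300.0                  # room temperature, K
bilayer_width_nm = 5.0     # typical lipid bilayer width
lipid_mass_gmol = 660.0    # typical phospholipid mass

# assumptions of this sketch (not stated in the text)
kT_over_eps = 1.1              # assumed reduced temperature of the membrane model
bilayer_width_in_sigma = 5.0   # assumed bilayer thickness in units of the bead size

kB = 1.380649e-23          # J/K
NA = 6.02214076e23         # 1/mol

sigma_m = bilayer_width_nm * 1e-9 / bilayer_width_in_sigma   # length unit in metres
eps_J = kB * T / kT_over_eps                                  # energy unit in joules
m_kg = lipid_mass_gmol / 3.0 / NA * 1e-3                      # mass unit = one bead mass
tau_s = sigma_m * math.sqrt(m_kg / eps_J)                     # intrinsic md time unit

print(f"sigma ~ {sigma_m*1e9:.2f} nm, eps ~ {eps_J:.2e} J, "
      f"m ~ {m_kg*NA*1e3:.0f} g/mol, tau ~ {tau_s*1e12:.1f} ps")
```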
for many viruses, assembly and budding occur simultaneously during virion formation. understanding the mechanisms underlying this process could promote biomedical efforts to block viral propagation and enable the use of capsids in nanomaterials applications. to this end, we have performed molecular dynamics simulations on a coarse-grained model that describes virus assembly on a fluctuating lipid membrane. our simulations show that the membrane can promote association of adsorbed subunits through dimensional reduction, but can also introduce barriers that inhibit complete assembly. we find several mechanisms, including one not anticipated by equilibrium theories, by which membrane microdomains, such as lipid rafts, can enhance assembly by reducing these barriers. we show how this predicted mechanism can be experimentally tested. furthermore, the simulations demonstrate that assembly and budding depend crucially on the system dynamics via multiple timescales related to membrane deformation, protein diffusion, association, and adsorption onto the membrane.
polar ice core records provide some of the most detailed views of past environmental changes up to 800000yr before present , in large part via proxy data such as the water isotopic composition and embedded chemical impurities .one of the most important features of ice cores as climate archives , is their continuity and the potential for high temporal resolution .greenland ice cores are particularly well suited for high resolution paleoclimatic studies , because relatively high snow accumulation rates allow seasonal changes in proxy data to be identified more than 50000yr in the past .the isotopic signature of polar precipitation , commonly expressed through the notation notation : where and is related to the temperature gradient between the evaporation and condensation site and has so far been used as a proxy for the temperature of the cloud at the time of condensation .one step further , the combined signal of and commonly referred to as the deuterium excess ( hereafter ) , constitutes a useful paleothermometer tool . via its high correlation with the temperature of the evaporation source , it has been used to resolve issues related to changes in the location of the evaporation site .a relatively recent advance in the use of water isotope ratios as a direct proxy of firn temperatures , has been introduced by .assessment of the diffusivity of the water isotopologues in the porous medium of the firn column can yield a temperature history , provided a dating model is available .the measurement of water stable isotopic composition is typically performed off - line via discrete sampling with traditional isotope ratio mass spectrometry ( hereafter irms ) . while high precision and accuracy can routinely be achieved with irms systems , water isotope analysis remains an elaborate process , which is demanding in terms of sample preparation , power consumption , sample size , consumables , isotope standards and carrier gases .the analysis of a deep ice core at its full length in high resolution ( typically 2.5 to 5 cm per sample ) requires the processing of a vast amount of water samples and can take years to complete .additionally , these procedures often come at the expense of not fully exploiting the temporal resolution available in the ice core .laser spectroscopy in the near and mid infrared region has been demonstrated as a potential alternative for water isotope analysis , presenting numerous advantages over irms .a major advantage of the technique is the ability to directly inject the sampled water vapour in the optical cavity of the spectrometer where both isotopic ratios and are measured simultaneuously .in contrast , in the most common irms techniques water is not measured as such , but has to be converted to a different gas prior to measurement . for analysis ,the equilibration method has been widely used , whereas analysis commonly involves the reduction of water to hydrogen gas over hot uranium , or chromium . 
however , the combined use of these two methods rules out simultaneous analysis of both water isotopologues on a given sample .more recently , in combination with the use of continuous flow mass spectrometers , conversion of water to and is performed in a pyrolysis furnice and allows simultaneous and measurement , but still on a single discrete sample .one of the drawbacks of this technique is the interference of , formed at the ion source by the reaction of and with the signal at _m / z_ .nowadays , commercial ir spectrometers are available with a precision comparable to irms systems .these units typically receive a continuous stream of water vapor and offer ease of use and portability .the analysis of another set of ice core proxies , that of chemical impurities , has similarly been an elaborate process , traditionally performed with liquid techniques . with the advent of continuous flow analysis ( heareafter cfa ) from continuously melted ice core segments , the measurement of chemical impurities has reached the point of largely exploiting the high resolution available in the core while it is often performed in the field .the continuous , on - line nature of the technique has resulted in a considerable reduction in sample preparation and processing times .recently , demonstrated the measurement of mixing ratios in an on - line semi continuous mode with the use of a gas chromatograph combined with a pulsed discharge and a thermal conductivity detector . here, we demonstrate the ability to perform continuous measurements of water isotope ratios from a stream of water vapor derived from a continuously melting ice rod by coupling a commercial ir spectrometer to a cfa system via a passive , low volume flash evaporation module . in the following ,we assess the system s precision , accuracy , and efficient calibration .we then comment on issues related to sample dispersion in the sample transfer lines , the evaporation module and the optical cavity of the spectrometer itself in order to determine the expected smoothing imposed on the acquired data sets .finally , isotopic analysis of ice core samples from the neem deep ice core are presented and compared to measurements performed in discrete mode .in the system described here , ( fig . 
[ fig1 ] ) an ice rod measuring 3.2.2 cm ( hereafter cfa run ) is continuously melted on a copper , gold - nickel coated melter at a regulated temperature of 20 .the concentric arrangement of the melter s surface facilitates the separation of the sample that originates from the outer and inner part of the core .approximately 90% of the sample from the inner part is transfered to the analytical system by means of a peristaltic pump with a flow rate of 16mlmin .this configuration provides an overflow of % from the inner to the outer part of the melter and ensures that the water sample that is introduced into the analytical system is not contaminated .a stainless steel weight sitting on top of the ice rod enhances the stability and continuity of the melting process .an optical encoder connected to the stainless steel weight , records the displacement of the rod .this information is used to accurately define the depth scale of the produced water isotope data .breaks in the ice rod are logged prior to the melting process and accounted for , during the data analysis procedure .gases included in the water stream originating from the air bubbles in the ice core are extracted in a sealed debubbler , with a volume of .the melt rate of the present system is approximately 3.2cmmin , thus resulting in an analysis time of per cfa run . during the intervals between cfa runs , mq and a total organic content less than 10ppb .] water is pumped through the system . a 4-port injection valve ( v1 in fig .[ fig1 ] ) allows the selection between the mq and sample water .the mq water is spiked with isotopically enriched water containing 99.8atom% deuterium , cortecnet inc . ) in a mixing ratio of . in this waya distinction between sample and mq water is possible , facilitating the identification of the beginning and end times of a cfa run . for further details on the analysis of chemical components or the extraction of gases for greenhouse gas measurementsthe reader is refered to and .we follow the same approach as previously presented in by coupling a commercially available cavity ring down ir spectrometer ( hereafter ws - crds ) purchasecd from picarro inc .( picarro l1102-i ) .the spectrometer operates with a gas flow rate of 30standardmlmin . in the optical cavitythe pressure is regulated at 47 mbar with two proportional valves in a feedback loop configuration up- and down - stream of the optical cavity at a temperature of 80 . the high signal to noiseratio achieved with the cavity ring down configuration in combination with fine control of the environmental parameters of the spectrometer , result in a performance comperable to modern mass spectrometry systems taylored for water stable isotope analysis .a 6-port injection valve ( v2 in fig .[ fig1 ] ) selects sample from the cfa line or a set of local water standards .the isotopic composition of the local water standards is determined with conventional irms and reported with respect to vsmow standard . a 6-port selection valve ( v3 in fig .[ fig1 ] ) is used for the switch between different water standards . a peristaltic pump ( p3 in fig .[ fig1 ] ) in this line with variable speeds , allows adjustment of the water vapor concentration in the spectrometer s optical cavity , by varying the pump speed . 
in that way, the system s sensitivity to levels of different water concentration can be investigated and a calibration procedure can be implemented .we use high purity perfluoroalkoxy ( pfa ) tubing for all sample transfer lines .injection of water sample into the evaporation oven takes place via a 40 m fused silica capillary where immediate and 100% evaporation takes place avoiding any fractionation effects .the setpoint of the evaporation temperature is set to 170 and is regulated with a pid controller .the amount of the injected water to the oven can be adjusted by the pressure gradient maintained between the inlet and waste ports of the t1 tee - split ( fig .[ fig1 ] ) .the latter depends on the ratio of the inner diameters of the tubes connected to the two ports as well as the length of the waste line .the total water sample consumption is .1mlmin maintained by the peristaltic pump p2 ( fig . 1 ) .for a detailed description of the sample preparation and evaporation module the reader may reffer to . a smooth and undisturbed sample delivery to the spectrometer at the level of results in optimum performance of the system .fluctuations of the sample flow caused by air bubbles or impurities are likely to result in a deteriorated performance of the measurement and are occasionally observed as extreme outliers on both and measurements . the processes that control the occurence of these events are still not well understood .in this study we present data collected in the framework of the neem ice core drilling project .measurements were carried out in the field during the 2010 field season and span 919.05 m of ice core ( depth interval 1281.52200.55 m ) . herewe exemplify the performance of the system over a section of ice from the depth interval 1382.1521398.607 .the age of this section spans with a mean age of 10.9kab2k .the reported age is based on a preliminary time scale constructed by stratigraphic transfer of the gicc05 time scale from the ngrip to the neem ice core . in fig .[ fig2 ] we present an example of raw data as acquired by the system .this data set covers 7 cfa runs ( 7.70 m of ice ) . a clear baseline of the isotopically heavier mq water can be seen in between cfa runs . at can observe a sudden drop in the signal of the water concentration due to a scheduled change of the mq water tank .adjacent to this , both and signals present a clear spike , characteristic of the sensitivity of the system to the stability of the sample flow rates . before any further processing we correctthe acquired data for fluctuations of the water concentration in the optical . to a good approximation the and signals show a linear response to differences in water concentrations around 20000ppmv .a correction is performed as : }{20\,000} ] to be equal to mm . in a similar manner $ ] is found to be equal to mm . the higher value calculated with the spectral method points to the additional diffusion of the sample at the melter and debubbler system that could not be considered in the analysis based on the step response . the impulse response of the system based on the updated value of is presented in fig . [ fig6 ] .in the ideal case of a noise - free measured signal and provided that the transfer function is known , one can reconstruct the initial isotopic signal from eq .( [ eq.2 ] ) as : the integral operation denotes the inverse fourier transform and with being the wavelength of the isotopic signals . 
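the smoothing of the isotopic signal by the system and its naive inversion, as expressed by the reconstruction formula above, can be illustrated numerically. the gaussian form assumed for the transfer function and the value of the smoothing length used below are assumptions of this sketch, chosen only to show that the inversion is exact in the absence of noise.

```python
import numpy as np

# assumed numbers, for illustration only
dz = 0.005      # sampling step along depth (m)
sigma = 0.007   # effective smoothing length of the measurement system (m), assumed
n = 1024

z = np.arange(n) * dz
true_signal = np.sin(2 * np.pi * z / 0.20) + 0.4 * np.sin(2 * np.pi * z / 0.05)

# gaussian transfer function in frequency space (k in cycles per metre)
k = np.fft.rfftfreq(n, d=dz)
transfer = np.exp(-2.0 * (np.pi * k * sigma) ** 2)

# forward model: the measured profile is the true profile damped by the transfer function
measured = np.fft.irfft(np.fft.rfft(true_signal) * transfer, n)

# ideal, noise-free reconstruction: divide by the transfer function and invert
restored = np.fft.irfft(np.fft.rfft(measured) / transfer, n)
print(np.max(np.abs(restored - true_signal)))   # essentially zero without noise

# with measurement noise the same division amplifies the high-frequency bands,
# which is why an optimal (wiener-type) filter is introduced in the text that follows
```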
in the presence of measurement noise , this approach will fail due to excess amplification of the high frequency noise channels in the spectrum of the signal .hereby we use the wiener approach in deconvoluting the acquired isotopic signals for the diffusion that takes place during the measurement .considering a measured isotopic signal optimal filter can be constructed that when used at the deconvolution step , it results in an estimate of the initial isotopic signal described as : that and are uncorrelated signals , the optimal filter is given by : ; where and are the power spectral densities of the signals and . in the same fashion as in the previous sectionwe assume that the spectrum of the noise free measured signal , is described by eq .( [ power ] ) where . regarding the noise , we assume red noise described by an ar1 process. the spectrum of the noise signal will then be described by : is the variance of the noise and is the coefficient of the ar1 process .we vary the parameters , , and so that the sum fits the spectrum of the measured signal .the set of parameters that results in the optimum fit is used to calculate the optimal filter . the constructed filters together with the transfer functions that were calculated based on the two different techniques outlined in sect . [ section_resolution ]are illustrated in fig .one can observe how the restoration filters work by amplifying cycles with wavelengths as low as 7 mm .beyond that point , the shape of the optimal filter attenuates cycles with higher frequency , which lie in the area of noise .an example of deconvoluted data section is given in fig .[ fig10 ] .it can be seen that the effect of the optimal filtering results in both the amplification of the signals that are damped due to the instrumental diffusion , as well as in the filtering of the measurement noise .combining and gives the deuterium excess as .the noise level of the signal can be calculated by the estimated noise levels of and as : as seen in fig .[ fig11 ] , the signal presents a low signal to noise ratio . in this case, the technique of optimal filtering can effectively attenuate unwanted high frequency noise components , thus reveiling a signal . the latter offers the possibility for the study of abrubt transitions as they have previously been investigated in , and time series from discrete high resolution samples .the on - line fashion in which these measurements are performed has the potential to yield not only higher temporal resolution but also better statistics for those climatic transitions .[ summary and conclusions ] we have succesfully demonstrated the possibility for on - line water isotopic analysis on a continuously melted ice core sample . we used an infrared laser spectrometer in a cavity ring down configuration in combination with a continuous flow melter system .a custom made continuous stream flash evaporator served as the sample preparation unit , interfacing the laser spectrometer to the melter system .local water standards have been used in order to calibrate the measurements to the vsmow scale .additionally , dependencies related to the sample size in the optical cavity have been accounted for .the melting procedure is recorded by an optical encoder that provides the necessary information for assigning a depth scale to the isotope measurements .we verified the validity of the applied calibrations and the calculated depth scale by comparing the cfa measurements with measurements performed on discrete samples in 5 cm resolution . 
by means of spectral methodswe provide an estimate of the noise level of the measurements .the uncertainty of the measurement is estimated at .06 , 0.2 , and 0.5 for , and , respectively .this performance is comparable to , or better than the performance typically achieved with conventional irms systems in a discrete mode .based on the isotopic step at the beginning of each cfa run , the impulse response , as well as the transfer function of the system can be estimated .we show how this method does not take into account the whole cfa system , thus underestimating the sample diffusion that takes place from the melter until the optical cavity of the spectrometer .we proposed a different method that considers the power spectrum of the cfa data in combination with the spectrum of a data set over the same depth interval measured in a discrete off - line fashion .the use of the optimal filtering deconvolution technique , provides a way to deconvolute the measured isotopic profiles for apparent sample dispersion effects .the combination of infrared spectroscopy on gaseuous samples with continuous flow melter systems provides new possibilities for ice core science.the non destructive , continuous , and on - line technique offers the possibility for analysis of multiple species on the same sample in high resolution and precision and can potentially be performed in the eld .we would like to thank dorthe dahl jensen for supporting our research .numerous drillers , core processors and general field assistants have contributed to the neem ice core drilling project with weeks of intensive field work .withought this collective effort , the measurements we present here would not be possible .bruce vaughn and james white have contributed to this project with valuable comments and ideas .this project was partly funded by the marie curie research training network for ice sheet and climate evolution ( mrtn - ct-2006 - 036127 ) . + + edited by : p. werle accoe , f. , berglund , m. , geypens , b. , and taylor , p. : methods to reduce interference effects in thermal conversion elemental analyzer / continuous flow isotope ratio mass spectrometry measurements of nitrogen - containing compounds , rapid commun ., 22 , 22802286 , 2008 .begley , i. s. and scrimgeour , c. m. : high - precision and measurement for water and volatile organic compounds by continuous - flow pyrolysis isotope ratio mass spectrometry , anal ., 69 , 15301535 , 1997 .brand , w. a. , geilmann , h. , crosson , e. r. , and rella , c. w. : cavity ring - down spectroscopy versus high - temperature conversion isotope ratio mass spectrometry ; a case study on and of pure water samples and alcohol / water mixtures , rapid commun ., 23 , 18791884 gkinis , v. , popp , t. j. , johnsen , s. j. , and blunier , t. : a continuous stream flash evaporator for the calibration of an ir cavity ring down spectrometer for isotopic analysis of water , isot .health s. , 46 , 113 , 2010 .gupta , p. , noone , d. , galewsky , j. , sweeney , c. , and vaughn , b. h. : demonstration of high - precision continuous measurements of water vapor isotopologues in laboratory and remote field deployments using wavelength - scanned cavity ring - down spectroscopy ( ws - crds ) technology , rapid commun ., 23 , 25342542 , 2009 .huber , c. and leuenberger , m. : fast high - precision on - line determination of hydrogen isotope ratios of water or ice by continuous - flow isotope ratio mass spectrometry , rapid commun ., 17 , 13191325 , 2003 .johnsen , s. j. , clausen , h. 
, dansgaard , w. , fuhrer , k. , gundestrup , n. , hammer , c. , iversen , p. , jouzel , j. , stauffer , b. , and steffensen , j. : irregular glacial interstadials recorded in a new greenland ice core , nature , 359 , 311313 , 1992 .johnsen , s. j. , clausen , h. b. , cuffey , k. m. , hoffmann , g. , schwander , j. , and creyts , t. : diffusion of stable isotopes in polar firn and ice , the isotope effect in firn diffusion , in : physics of ice core records , edited by : hondoh , t. , hokkaido university press , sapporo , 121140 , 2000 .johnsen , s. j. , dahl jensen , d. , gundestrup , n. , steffensen , j. p. , clausen , h. b. , miller , h. , masson - delmotte , v. , sveinbjrnsdttir , a. e. , and white , j. w. c. : oxygen isotope and palaeotemperature records from six greenland ice - core stations : camp century , dye-3 , grip , gisp2 , renland and northgrip , j. quaternary sci ., 16 , 299307 , 2001 .jouzel , j. , alley , r. b. , cuffey , k. m. , dansgaard , w. , grootes , p. , hoffmann , g. , johnsen , s. j. , koster , r. d. , peel , d. , shuman , c. a. , stievenard , m. , stuiver , m. , and white , j. w. c. : validity of the temperature reconstruction from water isotopes in ice cores , j. geophys .oceans , 102 , 2647126487 , 1997 .kaufmann , p. r. , federer , u. , hutterli , m. a. , bigler , m. , schpbach , s. , ruth , u. , schmitt , j. , and stocker , t. f. : an improved continuous flow analysis system for high - resolution field measurements on ice cores , environ ., 42 , 80448050 , 2008 .kerstel , e. r. t. , van trigt , r. , dam , n. , reuss , j. , and meijer , h. a. j. : simultaneous determination of the , and isotope abundance ratios in water by means of laser spectrometry , anal ., 71 , 52975303 , 1999 .rasmussen , s. o. , andersen , k. k. , johnsen , s. j. , bigler , m. , and mccormack , t. : deconvolution - based resolution enhancement of chemical ice core records obtained by continuous flow analysis , j. geophys ., 110 , d17304 , 2005 .rasmussen , s. o. , andersen , k. k. , svensson , a. m. , steffensen , j. p. , vinther , b. m. , clausen , h. b. , siggaard - andersen , m. l. , johnsen , s. j. , larsen , l. b. , dahl - jensen , d. , bigler , m. , rthlisberger , r. , fischer , h. , goto - azuma , k. , hansson , m. e. , and ruth , u. : a new greenland ice core chronology for the last glacial termination , j. geophys ., 111 , d06102 , 2006 .rthlisberger , r. , bigler , m. , hutterli , m. , sommer , s. , stauffer , b. , junghans , h. g. , and wagenbach , d. : technique for continuous high - resolution analysis of trace substances in firn and ice cores , environ .technol . , 34 , 338342 , 2000 .schpbach , s. , federer , u. , kaufmann , p. r. , hutterli , m. a. , buiron , d. , blunier , t. , fischer , h. , and stocker , t. f. : a new method for high - resolution methane measurements on polar ice cores using continuous flow analysis , environ .technol . , 43 , 53715376 , 2009 .steffensen , j. , andersen , k. , bigler , m. , clausen , h. , dahl - jensen , d. , fischer , h. , goto - azuma , k. , hansson , m. , johnsen , s. , jouzel , j. , masson - delmotte , v. , popp , t. j. , rasmussen , s. o. , rthlisberger , r. , ruth , u. , stauffer , b. , siggaard - andersen , m. l. , sveinbjrnsdttir , a. , svensson , a. , and white , j. w. c. : high - resolution greenland ice core data show abrupt climate change happens in few years , science , 321 , 680684 2008 .
a new technique for on-line, high-resolution isotopic analysis of liquid water, tailored for ice core studies, is presented. we built an interface between a wavelength-scanned cavity ring-down spectrometer (ws-crds) purchased from picarro inc. and a continuous flow analysis (cfa) system. the system offers the possibility to perform simultaneous water isotopic analysis of δ18O and δD on a continuous stream of liquid water generated from a continuously melted ice rod. injection of minute amounts of liquid water is achieved by pumping sample through a fused silica capillary and instantaneously vaporizing it with 100% efficiency in a home-made oven at a temperature of 170 °C. a calibration procedure allows for proper reporting of the data on the vsmow-slap scale. we apply the necessary corrections based on the assessed performance of the system regarding instrumental drifts and the dependence on the water concentration in the optical cavity. the melt rates are monitored in order to assign a depth scale to the measured isotopic profiles. application of spectral methods yields a combined uncertainty of the system below 0.1 ‰ and 0.5 ‰ for δ18O and δD, respectively. this performance is comparable to that achieved with mass spectrometry. dispersion of the sample in the transfer lines limits the temporal resolution of the technique; in this work we investigate and assess these dispersion effects, and by using an optimal filtering method we show how the measured profiles can be corrected for the smoothing effects resulting from the sample dispersion. considering the significant advantages the technique offers (measurement of δ18O and δD, potentially in combination with chemical components that are traditionally measured on cfa systems, and a notable reduction in analysis time and power consumption), we consider it an alternative to traditional isotope ratio mass spectrometry, with the possibility of deployment for field ice core studies. we present data acquired in the field during the 2010 season as part of the neem deep ice core drilling project in north greenland.
in this paper we discuss the following problem of _ time series segmentation _ : _ _ given a time series , divide it into two or more _ segments _ ( i.e. blocks of contiguous data ) such that each segment is homogeneous , but contiguous segments are heterogeneous .homogeneity / heterogeneity is described in terms of some appropriate statistics of the segments . the term _ change point detection _ is also used to describe the problem .examples of this problem arise in a wide range of fields , including engineering , computer science , biology and econometrics .the segmentation problem is also relevant to hydrology and environmetrics .for instance , in climate change studies it is often desirable to test a time series ( such as river flow , rainfall or temperature records ) for one or more sudden changes of its mean value .the time series segmentation problem has been studied in the hydrological literature .the reported approaches can be divided into two categories : _ sequential _ and _ nonsequential_. sequential approaches often involve _ intervention models _ ; see for example and , for a critique of intervention models , .most of the nonsequential time segmentation work appearing in the hydrological literature involves _ two _ segments . in other words ,the goal is to detect the existence and estimate the location of a _ single _ change point .a classical early study of changes in the flow of nile appears in .buishand s work is also often cited .for some case studies see .bayesian approaches have recently generated considerable interest .it appears that the _ multiple _ change point problem has not been studied as extensively .hubert s segmentation procedure is an important step in this direction .the goodness of a segmentation is evaluated by the sum squared deviation of the data from the means of their respective segments ; in what follows we will use the term _ segmentation cost _ for this quantity . given a time series, hubert s procedure computes the _ minimal cost _ segmentation with , 3 , ... change points .the procedure gradually increases ; for every value of the best segmentation is computed ; the procedure is terminated when differences in the means of the obtained segments are no longer statistically significant ( as measured by scheffe s contrast criterion ) .hubert mentions that this procedure can segment time series with several tens of terms but is `` ... unable at the present state to tackle series of much more than a hundred terms ... '' because of the combinatorial increase of computational burden .the work reported in this paper has been inspired by hubert s procedure .our goal is to develop an algorithm which can locate multiple change points in hydrological and/or environmental time series with several hundred terms or more . to achieve this goal, we adapt some _ hidden markov models ( hmm ) _ algorithms which have originally appeared in the speech recognition literature .( a survey of the relevant literature is postponed to section [ sec0303 ] . )we introduce a hmm of hydrological and/or enviromental time series with change points and describe an approximate _ expectation / maximization ( em ) algorithm _ which produces a converging sequence of segmentations .the algorithm also produces a sequence of estimates for the hmm parameters .time series of several hundred points can be segmented in a few seconds ( see section [ sec04 ] ) , hence the algorithm can be used in an interactive manner as an exploratory tool . 
even for timeseries of several thousand points the segmentation time is in the order of seconds .this paper is organized as follows . in section [ sec02 ]we review hubert s formulation of the time series segmentation problem . in section [ sec03 ]we formulate the segmentation problem in terms of hidden markov models and present a segmentation algorithm ; also we compare the hidden markov model approach with that of hubert .we present some segmentation experiments in section [ sec04 ] . in section [ sec05 ]we summarize our results . finally , in the appendix we present an alternative , non - hmm segmentation method , which is more accurate but also slower .in this section we formulate time series segmentation as an optimization problem .we follow hubert s presentation , but we modify his notation . given a time series = ( , , ... , ) and a number , a _ segmentation _ is a sequence of times = , , ... , which satisfy the intervals of integers , ] , ... , ] . now we define the function and note that + c(t , k , p)\right ) .\label{eq53}%\ ] ] note that , for simplicity of notation , we write as a function only of ; the quantities , , , * * , * * can be considered fixed . now consider a run of the segmentation algorithm which produces a sequence , , , , ... ._ suppose that for every _ _ we have _ . by the reestimation formula for we will have for every : furthermore , note that the viterbi algorithm yields the global maximum of the likelihood _ as a function of _ .hence , from ( [ eq53 ] ) and the reestimation formula for we will have for every : now , using first ( [ eq52 ] ) and then ( [ eq51 ] ) , we obtain and , from ( [ eq55 ] ) and ( [ eq53]), hence , _ if for every _ _ we have _ , then the sequence is increasing ; since it is also bounded from above by one , it must converge .it follows that the hmm segmentation algorithm produces a sequence of segmentations with increasing and convergent likelihood ; from convergence of the likelihood we also conclude that the algorithm will eventually terminate . furthermore, if is the segmentation obtained from is easy to check that from ( [ eq51 ] ) , ( [ eq57 ] ) follows that _hubert s segmentation cost is decreased in every iteration _ of the hmm segmentation algorithm . for the above analysis to hold ,we have required that for every .this condition is easy to check ; it is usually satisfied in practice ; and it can be _ enforced _ by choosing the parameter to be not too close to 1 ( if , then the cost of state transitions is very high and transitions are avoided ) . one way to interpretthe above analysis is the following : using an appropriate value of , the segmentation algorithm presented here becomes an iterative , approximate way to find hubert s optimal segmentation .the approximation is usually very good , as will be seen in section [ sec04 ] .this interpretation is completely nonprobabilistic and does not depend on the use of the hidden markov model .* computational issues*. 
we must also mention that succesful implementation of the viterbi algorithm requires a normalization of the s to avoid numerical underflow ; alternatively one can work with the logarithms of the the s and perform additions rather than multiplications .an extensive mathematical , statistical and engineering literature covers both the theoretical and applied aspects of hmm s .the reader can use as starting points for a broader overview of the subject .em - like algorithms for hmm s were introduced in .the em family of algorithms was introduced in great generality in ; work on hmm s also appears in the econometrics , as well as in the biological literature .these references are merely starting points ; the literature is very extensive .as already mentioned , the em segmentation algorithm used here is a variation of algorithms which are well - established in the field of speech recognition ; for example see . taking into account the extensive hmm literature , as well as various ideas reported in the hydrological literature, the algorithm of section [ sec03024 ] can be extended in several directions . 1 .the assumption that the observations are normally distributed is not essential .other forms of probability density can be used in ( [ eq09 ] ) .similarly , by a simple modification of ( [ eq09 ] ) the algorithm can handle vector valued observations .2 . a basic idea of the algorithmis that each segment must be _homogeneous_. assuming that the observations within a segment are generated independently and normally , segment homogeneity is evaluated by the deviation of from the segment mean . butalternative assumptions can be used .for example , assume that the observations are generated by an autoreggressive mechanism , i.e. that , for and , we have ( where is a white noise term ) .the segmentation algortithm can be used within this framework . in this casethe reestimation phase computes the ar , , , , which can be estimated from , , , using a least squares fitting algorithm .this approach is used in section [ sec0403 ] to fit a hmm autoregressive model to global temperature data .similarly , it may be assumed that the observations are generated by a polynomial regression of the form ( for and ) where is a noise term .again , the coefficients , , , can be computed at every reestimation phase by a least squares fitting algorithm .additional constraints can be used to enforce continuity across segments . in the case of 1st order polynomialsthere are only two coefficients , , , which are determined by the continuity assumptions ; the iterative reestimation of the change points can still be performed .this case may be of interest for detection of trends .it has been mentioned in section [ sec03021 ] that can also be reestimated in every iteration of the em algorithm .preserving the left - to - right structure implies that for and for all different from and , we have ; furthermore , for we have .the parameters can be estimated by .however , some preliminary experiments indicate that this approach does not yield improved segmentations .5 . on the other hand ,the treatment of the state transition can be modified in a more substantial manner by dropping the left - to - right assumption . 
in the current model each state of the markov chaincorresponds to a single segment and , because of the left - to - right structure , it is visited at most once .an alternate approach would be to assign some physical significance to the states .for instance , states could be chosen to correspond to climate regimes such as `` dry '' , `` wet '' etc . in this case a state could be visited more than once .this approach allows the choice of models which incorporate expert knowledge about the evolution of climate regimes . on the other hand , if the left - to - right structure is dropped , the number of free parameters in the matrix increases .these parameters could be estimated ( conditional on a particular state sequence ) by the enhancements of arbitrary transition structure and transition probability estimation are easily accommodated by our algorithm .in this section we evaluate the segmentation algorithm by numerical experiments .the first experiment involves an annual river discharge time series which contains 86 points .the second example involves the reconstructed annual mean global temperature time series and contains 282 points . both of these examples involve segmentation by minimization of total deviation from segment means .the third example again involves the annual mean global temperature time series , but performs segmentation by minimization of autoregressive prediction error .the fourth example involves artificially generated time series with up to 1500 points . in this experimentwe use the time series of the senegal river annual discharge data , measured at the bakel station for the years 1903 - 1988 .the length of the time series is 86 .the same data set has been used by hubert .the goal is to find the segmentation which is optimal with respect to total deviation from the segment means , has the highest possible order and is statistically significant according to scheffe s criterion .we run the segmentation algorithm for increasing values of . in the experiments reported herewe have always used ( similar results are obtained for other values of in the interval [ 0.85 , 0.95 ] . for every value of , convergence is achieved by the 3rd or 4th iteration of the algorithm .the optimal segmentations are presented in table 1 .the segmentations which were validated by the scheffe criterion appear in bold letters .[ c]|l|l|l|l|l|l|l|l| & + 1 & * 1902 * & * 1988 * & & & & & + 2 & * 1902 * & * 1967 * & * 1988 * & & & & + 3 & * 1902 * & * 1949 * & * 1967 * & * 1988 * & & & + 4 & * 1902 * & * 1917 * & * 1953 * & * 1967 * & * 1988 * & & + 5 & * 1902 * & * 1921 * & * 1936 * & * 1949 * & * 1967 * & * 1988 * & + 6 & 1902 & 1921 & 1936 & 1949 & 1967 & 1971 & 1988 + * table 1 * hence it can be seen that the optimal and statistically significant segmentation is that of order 5 , i.e. the segments are [ 1903,1921 ] , [ 1922,1936 ] , [ 1937,1949 ] , [ 1950,1967 ] , [ 1967,1988 ] . that this is the globally optimal segmentation , has been shown by hubert in using his exact segmentation procedure . 
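for concreteness, the deviation-from-segment-means cost that the segmentations of table 1 minimize can be evaluated for any candidate segmentation with a few lines of code. the sketch below is not the implementation used in the experiments, and the data array is a stand-in; only the change-point years of the order-5 segmentation are taken from table 1.

```python
import numpy as np

def segmentation_cost(x, ends):
    """hubert-style cost: sum over segments of squared deviations from the segment mean.
    `ends` lists the index of the last element of each segment (inclusive), the final
    entry being len(x) - 1."""
    cost, start = 0.0, 0
    for e in ends:
        seg = np.asarray(x[start:e + 1], dtype=float)
        cost += float(np.sum((seg - seg.mean()) ** 2))
        start = e + 1
    return cost

# example: the order-5 segmentation of table 1 for an annual series covering 1903-1988,
# i.e. segments ending in 1921, 1936, 1949, 1967 and 1988 (stand-in data shown here)
years = np.arange(1903, 1989)
ends = [int(np.where(years == y)[0][0]) for y in (1921, 1936, 1949, 1967)] + [len(years) - 1]
x = np.random.default_rng(0).normal(size=len(years))
print(segmentation_cost(x, ends))
```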
a plot of the time series , indicating the 5 segments and the respective means appears in figure 2 .* figure 2 to appear here * we have verified that the hmm algorithm finds the globally optimal segmentation for all values of ( as listed in table 1 ) .we performed this verification by use of the exact dynamic programming algorithm presented in the appendix .the conclusion is that , in this experiment , the hmm segmentation algorithm finds the optimal segmentations considerably faster than the exact algorithm .specifically , running the entire experiment ( i.e. obtaining the hmm segmentations of _ all _ orders ) with a matlab implementation of the hmm segmentation algorithm took 1.1 sec on a pentium iii 1 ghz personal computer ; we expect that a fortran or c implementation would take about 10% to 20% of this time . in this experimentwe use the time series of annual mean global temperature for the years 1700 1981 . only the temperatures forthe period 1902 1981 come from actual measurements ; the remaining temperatures were _ reconstructed _ according to a procedure described in and also at the internet address ` http://www.ngdc.noaa.gov/paleo/ei/ei_intro.html ` .the length of the time series is 282 .the goal is again to find the segmentation which is optimal with respect to total deviation from the segment - means , has the highest possible order and is statistically significant according to scheffe s criterion .we run the segmentation algorithm for , using .convergence takes place in 4 iterations or less .the optimal segmentations are presented in table 2 .the segmentations which were validated by scheffe s criterion appear in bold letters .[ c]|l|l|l|l|l|l|l|l| & + 1 & * 1700 * & * 1981 * & & & & & + 2 & * 1700 * & * 1930 * & * 1981 * & & & & + 3 & * 1700 * & * 1812 * & * 1930 * & * 1981 * & & & + 4 & * 1700 * & * 1720 * & * 1812 * & * 1930 * & * 1981 * & & + 5 & 1700 & 1720 & 1812 & 1926 & 1935 & 1981 & + 6 & 1700 & 1720 & 1812 & 1926 & 1934 & 1977 & 1981 + * table 2 * hence it can be seen that the optimal and statistically significant segmentation is of order 4 , i.e. the segments are [ 1700,1720 ] , [ 1721,1812 ] , [ 1813,1930 ] , [ 1931,1981 ] . a plot of the time series , indicating the 4 segments and the respective means appears in figure 3 .* figure 3 to appear here * the _ total _ execution time for the experiment ( i.e. to obtain optimal segmentations of all orders ) is 2.97 sec .the segmentations of table 2 are the globally optimal ones , as we have verified using the dynamic programming segmentation algorithm . in this experimentwe again use the annual mean global temperature time series , but now we assume that it is generated by a _ switching regression _ hmm .specifically , we assume a model of the form where the parameters , , , are specific to the -th state of the underlying markovian process . 
given a particular segmentation ,these parameters can be estimated by a least squares fitting algorithm .hence the segmentation algorithm can be modified to obtain the optimal segmentation with respect to the model of ( [ eq46 ] ) .once again we run the segmentation algorithm for , using .the optimal segmentations thus obtained are presented in table 3 .[ c]|l|l|l|l|l|l|l|l| & + 1 & * 1700 * & * 1981 * & & & & & + 2 & * 1700 * & * 1926 * & * 1981 * & & & & + 3 & * 1700 * & * 1833 * & * 1926 * & * 1981 * & & & + 4 & * 1700 * & * 1769 * & * 1833 * & * 1926 * & * 1981 * & & + 5 & 1700 & 1769 & 1833 & 1895 & 1926 & 1981 & + 6 & 1700 & 1769 & 1825 & 1877 & 1904 & 1926 & 1981 + * table 3 * in this case segment validation is not performed by the scheffe criterion ; instead we use a prediction error correlation criterion .this indicates the maximum statistically significant number of segments is =4 and the segments are [ 1700,1769 ] , [ 1770,1833 ] , [ 1834,1926 ] , [ 1927,1981 ] . a plot of the time series , indicating the 4 segments and the respective autoregressions appears in figure 3 . * figure 4 to appear here *recall that the segments obtained by means - based segmentation are [ 1700,1720 ] , [ 1721 , 1812 ] , [ 1813 , 1930 ] , [ 1931 , 1981 ] .this seems to be in reasonable agreement with the ar - based segmentation , excepting the discrepancy of 1720 and 1769 . from a numerical point of view, there is no a priori reason to expect that the ar - based segmentation and means - based segmentation should give the same results .the fact that the two segmentations are in reasonable agreement , supports the hypothesis that actual climate changes have occurred approximately at the transition times indicated by both segmentation methods . finally , let us note that the _ total _ execution time for the experiment ( i.e. to obtain optimal segmentations of every order ) is 3.07 sec and that the segmentations of table 3 are the globally optimal ones , as we have verified using the dynamic programming segmentation algorithm .the goal of the final experiment is to investigate the scaling properties of the algorithm , specifically the scaling of execution time with respect to time series length and the scaling of accuracy with respect to noise in the observations . to obtain better control over these factors , artificial time seriesare used , which have been generated by the following mechanism . the time seriesare generated by a 5-th order hmm .every time series is generated by running the hmm from state no.1 until state no.5 .hence , every time series involves 5 state transitions and , for the purposes of this experiment , this is assumed to be known a priori . on the other hand , it can be seen that the length of the time series is variable . with a slight change of notation , in this section denote the _ expected _ length of the time series , which can be controlled by choice of the probability .the values of were chosen to generate time series of average lengths 200 , 250 , 500 , 750 , 1000 , 1250 , 1500 .the observations are generated by a normal distribution with mean ( = 1 , 2 , ... , 5 ) and standard deviation .in all experiments the values == = 1 , = = were used .several values of were used , namely = 0.00 , 0.10 , 0.20 , 0.30 , 0.50 , 0.75 , 1.00 , 1.25 , 1.50 , 1.75 , 2.00 . for each combination of and 20 time series were generated and the hmm segmentation algorithm was run on each one . 
for each runtwo quantities were computed : , accuracy of segmentation , and , execution time .segmentation accuracy is computed by the formula where the indicator function is equal to 1 when and equal to 0 otherwise . from these datatwo tables are compiled .table 4 lists ( in seconds ) as a function of ( i.e. is averaged over all time series of the same ) .table 5 lists average segmentation accuracy as a function of and ( i.e. is averaged over the 20 time series with the same and ) .as expected , segmentation accuracy is generally a decreasing function of .[ c]|l|l|l|l|l|l|l|l| & 200 & 250 & 500 & 750 & 1000 & 1250 & 1500 + & 0.193 & 0.249 & 0.585 & 1.024 & 1.845 & 3.026 & 4.60 + * table 4 . * average execution time ( in seconds ) as a function of average time series length . [ c]|l|lllllll| & 200 & & & & & & + & + 0.00 & 1.0000 & 1.0000 & 1.0000 & 0.9692 & 1.0000 & 1.0000 & 0.9902 + 0.10 & 1.0000 & 1.0000 & 1.0000 & 0.9814 & 1.0000 & 1.0000 & 1.0000 + 0.20 & 1.0000 & 0.9806 & 1.0000 & 1.0000 & 1.0000 & 0.9716 & 1.0000 + 0.30 & 1.0000 & 1.0000 & 0.9999 & 0.9792 & 1.0000 & 0.9807 & 1.0000 + 0.50 & 0.9989 & 0.9993 & 0.9994 & 0.9997 & 1.0000 & 0.9997 & 1.0000 + 0.75 & 0.9945 & 0.9979 & 0.9663 & 0.9521 & 0.9988 & 0.9992 & 0.9991 + 1.00 & 0.9881 & 0.9880 & 0.9863 & 0.9974 & 0.9517 & 0.9981 & 0.9711 + 1.25 & 0.9778 & 0.9710 & 0.9762 & 0.9924 & 0.9965 & 0.9843 & 0.9781 + 1.50 & 0.9561 & 0.9701 & 0.9874 & 0.9341 & 0.9507 & 0.9362 & 0.9956 + 1.75 & 0.9337 & 0.8985 & 0.9494 & 0.9341 & 0.9708 & 0.9272 & 0.9942 + 2.00 & 0.8628 & 0.8617 & 0.8255 & 0.9141 & 0.8600 & 0.9523 & 0.8297 + * table 5 . * average classif .accuracy as a function of average time series length and noise level .in this paper we have used hidden markov models to represent hydrological and enviromental time series with multiple change points . inspired by hubert s pioneering work and by methods of speech recognition , we have presented a fast iterative segmentation algorithm which belongs to the em family .the quality of a particular segmentation is evaluated by the deviation from segment means , but extensions involving autoregressive hmm s , trend - generating hmm s etc . can also be used . because execution time is o( ) , our algorithm can be used to explore various possible segmentations in an interactive manner .we have presented a convergence analysis which shows that under appropriate conditions every iteration of our algorithm increases the likelihood of the resulting segmentation .furthermore , numerical experiments ( involving river flow and global temperature time series ) indicate that the algorithm can be expected to converge to the _ globally _ optimal segmentation .in this appendix we present an alternative time series segmentation algorithm which , unlike the hmm algorithm , is _ guaranteed _ to produce the _ globally optimal _ segmentation of a time series .this superior performance , however , is obtained at the price of longer execution time .still , the algorithm is computationally viable for time series of several hundred terms .we describe the algorithm briefly here ; a more detailed report appears in .a _ generalization _ of the time series segmentation problem discussed in previous sections is the following . given a time series , , ... , and a fixed , find a sequence of times = , , ... , which satisfies ... = , and minimizes consists of a sum of terms . 
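before continuing with the cost functions used by the dynamic-programming algorithm of the appendix, here is a small sketch of the generation mechanism and of the accuracy measure used in the scaling experiment above: a left-to-right hmm whose expected length is controlled by the self-transition probability, gaussian observations per state, and accuracy defined as the fraction of time steps whose estimated state matches the true one. all parameter values below are placeholders, not the ones used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_series(p_stay, means, sigma):
    """left-to-right hmm: remain in state k with probability p_stay, otherwise
    move to state k+1; observations are gaussian with mean means[k] and std sigma.
    the expected length grows as p_stay approaches 1."""
    states, obs, k = [], [], 0
    while k < len(means):
        states.append(k)
        obs.append(rng.normal(means[k], sigma))
        if rng.random() > p_stay:          # advance to the next state
            k += 1
    return np.array(states), np.array(obs)

def accuracy(true_states, est_states):
    """fraction of time steps whose estimated state equals the true one."""
    return float(np.mean(true_states == est_states))

s_true, x = generate_series(p_stay=0.99, means=[1, 2, 3, 4, 5], sigma=0.5)
print(len(x), accuracy(s_true, s_true))    # the true labelling trivially scores 1.0
```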
for example , hubert s cost function can be obtained by setting hence hubert s segmentation cost ( [ eq99 ] ) is a special case of ( [ eq101 ] ) .similarly , consider _ autoregressive _ models of the form where , , ... , ) and , , , , ] ( the denotes transpose of a matrix ) .then we can set then the segmentation cost becomes the , , , ( elements of ) are unknown , but can be determined by least squares fitting on , , ... , .a similar formulation can be used for regressive models of the form where = , , , ^{\prime} ] .hence we see that ( [ eq101 ] ) is sufficiently general to subsume many cost functions of practical interest . * * input : * the time series ; a termination number . * * initialization * * for _ _ * * for _ _ * * * * * end * * * end * * minimization * * for * * for * * * for * * * * * * * end * * * * * * * * end * end * * backtracking * * for * * * * for * * * * * end * * * end on termination , the dynamic programming segmentation algorithm has computed for ; in other words it has recursively solved a _ sequence _ of minimization problems . for ,the optimal segmentation = ( , , ... , ) has been obtained by backtracking .the recursive minimization is performed in the second part of the algorithm ; it is seen that computation time is o( ) .this is not as good as the o( ) obtained by the hmm algorithm ( note that usually is less than ) , but is still computationally viable for in the order of a few hundreds .the backtracking part of the algorithm has execution time o( ) . however , in many cases the computationally most expensive part of the algorithm is the initialization phase , i.e. the computation of .this involves o( ) computations of and can increase the computation cost by one or more orders of magnitude .for example , if we apply the algorithm to detect changes in the mean , then which involves addittions ; if ( [ eq111 ] ) is used in the initialization phase , then this phase requires o( ) computations and this severely limits computational viability to relatively short time series . hence , to enhance the computational viability of the dynamic programming segmentation algorithm , it is necessary to find efficient ways to perform the initialization phase . in the next two sections, we will deal with this question for two specific forms of : the first form pertains to the computation of means and the second to the computation of regressions and autoregressions .the computation of means can be performed recursively , as will now be shown . for , , we must compute for , , define the following additional quantities: then we have and from ( [ eq141 ] ) , ( [ eq142 ] ) follows that ( for , ) the above computations can be implemented in time o( ) by the following algorithm . *for _ _ * * * * * * for _ _ * * * * * * * * end * end * for _ _ * * for * * * * * end * end * for _ _ * * * * for _ _ * * * * * end * end hence , if the above code replaces the initialization phase of the dynamic programming algorithm in section [ seca02 ] , we obtain an o( ) implementation of the entire algorithm . in other words, we obtain an algorithm which , given a time series of length , computes the global minimum of hubert s segmentation cost ( for all segmentations of orders ) in time o( ) consider now autoregressive models described by ( [ eq103 ] ) .as already mentioned , in this case we have hence is given by where , , , ... 
, ] * * * * * * = * * * = * * * * * end * end hence , if the above code replaces the initialization phase of the dynamic programming segmentation algorithm in section [ seca02 ] , we have an o( ) implementation of the entire algorithm for autoregressive models .a similar modification is possible for regressive models of the form ( [ eq103 ] ) .baum and j.a .`` an inequality with applications to statistical estimation for probabilistic functions of markov processes and to a model for ecology '' . _ bull ._ , vol.73 , pp.360363 , 1967 .p. hubert .`` change points in meteorological analysis '' . in _ applications of __ time series analysis in astronomy and meteorology _ , t.subba rao , m.b .priestley and o. lessi ( eds . ) .chapman and hall , london , 1997 .l. perreault , m. hache , m. slivitzky and b. bobee .`` detection of changes in precipitation and runoff over eastern canada and u.s . using a bayesian approach '' .res . and risk ass ._ , vol . 13 , pp.201 - 216 , 1999 . l. perreault , e. parent , j. bernier , b. bobee and m. slivitzky .`` retrospective multivariate bayesian change - point analysis : a simultaneous single change in the mean of several hydrological sequences '' .res . and risk ass .14 , pp.243 - 261 , 2000 .l. perreault , j. bernier , b. bobee and e. parent .`` bayesian change - point analysis in hydrometeorological time series .comparison of change - point models and forecasting '' ._ j. hydrol .235 , pp.242 - 263 , 2000 .
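as a compact companion to the appendix above, the following sketch implements the mean-based dynamic-programming segmentation with the cumulative-sum initialization, so that each pairwise segment cost is available in constant time after a linear precomputation and the recursion plus backtracking return the globally optimal segmentations. the loop organisation and all names are ours; the original algorithm is only reconstructed here in outline.

```python
import numpy as np

def dp_segmentation(y, k_max):
    """globally optimal mean-based segmentation for up to k_max segments.
    returns best[k] = minimal total deviation from segment means using k segments
    (k = 1..k_max) and the segment-end indices of the k_max-segment solution."""
    T = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])          # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])     # prefix sums of squares

    def c(i, j):                                        # cost of segment y[i..j]
        n = j - i + 1
        sm = s1[j + 1] - s1[i]
        return (s2[j + 1] - s2[i]) - sm * sm / n

    best = np.full((k_max + 1, T), np.inf)              # best[k, t]: k segments over y[0..t]
    arg = np.zeros((k_max + 1, T), dtype=int)
    for t in range(T):
        best[1, t] = c(0, t)
    for k in range(2, k_max + 1):                       # o(k_max * T**2) recursion
        for t in range(k - 1, T):
            vals = [best[k - 1, s] + c(s + 1, t) for s in range(k - 2, t)]
            idx = int(np.argmin(vals))
            best[k, t], arg[k, t] = vals[idx], idx + k - 2
    ends, t = [T - 1], T - 1                            # backtracking
    for k in range(k_max, 1, -1):
        t = arg[k, t]
        ends.append(t)
    return best[1:, T - 1], sorted(ends)

costs, seg_ends = dp_segmentation(np.random.default_rng(2).standard_normal(300), k_max=6)
```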
motivated by hubert's segmentation procedure , we discuss the application of hidden markov models ( hmm ) to the segmentation of hydrological and environmental time series . we use an hmm algorithm which segments time series of several hundred terms in a few seconds and is computationally feasible for even longer time series . the segmentation algorithm computes the maximum likelihood segmentation by use of an expectation / maximization iteration . we rigorously prove algorithm convergence and use numerical experiments , involving temperature and river discharge time series , to show that the algorithm usually converges to the globally optimal segmentation . the relation of the proposed algorithm to hubert's segmentation procedure is also discussed .
epitaxial growth is characterized by the deposition of new material on existing layers of the same material under high vacuum conditions .this technique is used in the semiconductor industry for the growth of thin films .the crystals grown may be composed of a pure chemical element like silicon or germanium , or may either be an alloy like gallium arsenide or indium phosphide . in case of molecular beam epitaxythe deposition takes place at a very slow rate and almost atom by atom .the goal in most situations of thin film growth is growing an ordered crystal structure with flat surface .but in epitaxial growth it is quite usual finding a mounded structure generated along the surface evolution .the actual origin of this mounded structure is to a large extent unknown , although some mechanisms ( like energy barriers ) have already been proposed . attempting to perform _ ab initio_ quantum mechanical calculations in this system is computationally too demanding , what opens the way to the introduction of simplified models .these have been usually developed within the realm of non - equilibrium statistical mechanics , and can be of a discrete probabilistic nature or have the form of a differential equation .discrete models usually represent adatoms ( the atoms deposited on the surfaces ) as occupying lattice sites .they are placed randomly at one such site and then they are allowed to move according to some rules which characterize the different models . a different modelling possibility is using partial differential equations , which in this field are frequently provided with stochastic forcing terms . in this workwe will focus on rigorous and numerical analyses of ordinary differential equations related to models which have been introduced in the context of epitaxial growth .we hope that a systematic mathematical study will contribute to the understanding of this sort of processes , which are relevant both in pure physics and its industrial applications , in the long term .the mathematical description of epitaxial growth uses the function which describes the height of the growing interface in the spatial point at time . although this theoretical framework can be extended to any spatial dimension , we will concentrate here on the physical situation .a basic modelling assumption is of course that is an univalued function , a fact that holds in a reasonably large number of cases .the macroscopic description of the growing interface is given by a partial differential equation for which is usually postulated using phenomenological and symmetry arguments .a prominent example of such a theory is given by the kardar - parisi - zhang equation which has been extensively studied in the physical literature and it is currently being investigated for its interesting mathematical properties .it has been argued however that epitaxial growth processes should be described by some equation coming from a conservation law and , in particular , that the term should not be present in such an equation . to this end , among others , the conservative counterpart of the kardar - parisi - zhang equation was introduced this equation is conservative in the sense that the first moment is constant if the appropriate boundary conditions are used .it can be considered as a higher order counterpart of the kardar - parisi - zhang equation , and it poses as well a number of fundamental mathematical questions . in this work we will focus on a variation of the last equation . 
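for reference, the two equations mentioned above can be written in the standard notation of the physics literature (coefficient and sign conventions vary between references) as

\partial_t h \;=\; \nu\,\nabla^2 h \;+\; \tfrac{\lambda}{2}\,(\nabla h)^2 \;+\; \eta(\mathbf{x},t) \qquad \text{and} \qquad \partial_t h \;=\; -\,\nabla^2\!\left[\,\nu\,\nabla^2 h + \tfrac{\lambda}{2}\,(\nabla h)^2\,\right] \;+\; \eta(\mathbf{x},t),

where h is the height field and \eta a zero-mean forcing term ; the second, conserved form is the equation whose variation is developed below .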
its formal derivation will be presented in the following section .the remainder of this work will be devoted to clarify the analytical properties of the radial stationary solutions to the model under consideration .herein we will adopt a variational formulation of the surface growth equation , which has been postulated as a simple and yet physically relevant way of developing growth models . in order to proceed with our formal derivation, we will assume that the height function obeys a gradient flow equation with a forcing term .\ ] ] the functional denotes a potential which describes the microscopic properties of the interface and , at the macroscopic scale , it is assumed that it can be expressed as a function of the surface mean curvature only where the presence of the square root terms models growth along the normal to the surface , denotes the mean curvature and is an unknown function of .we will furthermore assume that this function can be expanded in a power series and subsequently formally apply the small gradient expansion , which assumes .this is a classical approximation in this physical context and it is basic in the derivation of the kardar - parisi - zhang equation among others . in the resulting equation , only linear and quadratic terms in the field and its derivatives are retained , as higher order nonlinearities are assumed not to be relevant in the large scale description of a growing interface .the final result reads which is , as well as ( [ ssg ] ) , a conservative equation in the sense that is constant if appropriate boundary conditions are used .we note that powers of the mean curvature higher than the cubic one in expansion ( [ expansion ] ) do not contribute to equation ( [ parabolic ] ) as they imply cubic or higher nonlinearities of the field or its derivatives .the terms in equation ( [ parabolic ] ) have a clear geometrical meaning .the term proportional to is the result of the minimization of the zeroth order of the mean curvature , that is , it corresponds to the minimization of the surface area .its functional form simply reduces to standard diffusion .the term proportional to comes from the minimization of the mean curvature and actually it is the determinant of the hessian matrix , which is nothing but the small gradient approximation of the surface gaussian curvature .so we see that , through the small gradient approximation , _ a gradient flow pursuing the minimization of the mean curvature leads to a evolution which favors the growth of the gaussian curvature_. the term proportional to comes from the minimization of the squared mean curvature . a functional involving the squared mean curvature is known as willmore functional and it has its own status within differential geometry .the bilaplacian accompanying is the corresponding linearized euler - lagrange equation of the willmore functional when looking for flat minimizers , and it has already appeared in the context of mathematical elasticity .finally the term proportional to comes from the minimization of the cubic power of the mean curvature and it involves a nonlinear combination of laplacians of the field .we note that from a more puristic geometrical viewpoint one would retain only even powers of the mean curvature in expansion ( [ expansion ] ) , which would give rise to a symmetric solution to the corresponding simplification of equation ( [ parabolic ] ) ( i. e. 
, a solution invariant to the transformation ) .however , from a physical viewpoint , we are seeking for a solution to a partial differential equation which represents the interface between two different media ( solid structure and vacuum in the present case ) so this symmetry is not guaranteed a priori , and we need to retain the odd powers of the mean curvature in expansion ( [ expansion ] ) . for our current purposes we will focus on the associated stationary problem to a simplification of equation ( [ parabolic ] ) . such an equation can be obtained employing well known facts from the theory of non - equilibrium surface growth .we may invoke classical scaling arguments in the physical literature to disregard the last term as a higher order correction which will not be present in the description of the largest scale properties of the evolving surface .this practically reduces to setting in equation ( [ parabolic ] ) . in epitaxial growthone may phenomenologically set , and we will assume so for the rest of this work .the underlying physical reason is that the diffusion proportional to is triggered by the effect of gravity on adatoms , and this effect is negligible in the case of epitaxial growth .the resulting equation reads this partial differential equation can be thought of as been an analogue of equation ( [ ssg ] ) .indeed , it has been shown that this equation might constitute a suitable description of epitaxial growth in the same sense equation ( [ ssg ] ) is so , and it even shows more intuitive geometric properties .so , at the physical level , we can consider equation ( [ parabolic2 ] ) as a higher order conservative counterpart of the kardar - parisi - zhang equation . at the mathematical level we can consider it as a sort of gaussian curvature flow which is stabilized by means of a higher order viscosity term .furthermore , this viscosity term , as we have seen , has a clear geometrical meaning .as we explain above , in this work we are concerned with the stationary version of ( [ parabolic2 ] ) , which reads after getting rid of the equation constant parameters by means of a trivial re - scaling of field and coordinates .our last assumption is that the forcing term is time independent .this type of forcing is known in the physical literature as columnar disorder , and it has an actual experimental meaning within the context of non - equilibrium statistical mechanics .the constant is a measure of the intensity of the rate at which new particles are deposited , and for physical reasons we assume and .we will devote our efforts to rigorously and numerically clarify the existence and multiplicity of solutions to this elliptic problem when set on a radially symmetric domain .we start looking for radially symmetric solutions of boundary value problem ( [ pro0 ] ) with , where is the radial coordinate , and homogeneous dirichlet boundary conditions .we set the problem on the unit disk .that is , we look for solutions of the form where by means of a direct substitution we find ' \right\ } ' = \frac{1}{r } \ , \tilde{u } ' \tilde{u } '' + \lambda f(r),\ ] ] where , and the conditions , , , and ; the first one imposes the existence of an extremum at the origin and the second and third ones are the actual boundary conditions .the fourth boundary condition is technical and imposes higher regularity at the origin .if this condition were removed this would open the possibility of constructing functions whose second derivative had a peak at the origin .this would in turn imply the 
presence of a measure at the origin when calculating the fourth derivative of such an , so this type of function can not be considered as an acceptable solution of ( [ fullradial ] ) whenever is a function . throughout this sectionwe will assume , r \, dr) ] , which is the closure of the space of radially symmetric smooth functions compactly supported inside the unit ball of with the norm of , r \ , dr) ]but it is otherwise arbitrary .the existence and multiplicity of solutions to our boundary value problem will be obtained by searching critical points of functional ( [ radialfunc ] ) .we start proving a result concerning the geometry of this functional .functional ( [ radialfunc ] ) admits the following radial ( in the sobolev space ) lower bound : and stands for the radial two - dimensional measure .we have the following chain of inequalities ^{1/2 } \ge \\ \nonumber \\ \nonumber \frac{1}{2 }\int_0 ^ 1 \left ( u '' \right)^2 r \ , dr - c_1 \left [ \int_0 ^ 1 \left ( u '' \right)^2 r \ , dr \right]^{3/2 } - c_2 \ , \lambda \ , ||f||_{l^1(\mu ) } \left [ \int_0 ^ 1 \left ( u '' \right)^2 r \ , dr \right]^{1/2 } = \\ \nonumber \\ \frac{1}{2 } \ , || u''||_{l^2(\mu)}^2 - c_1 \ , u''||_{l^2(\mu)},\end{aligned}\ ] ] where we have used that ] such that the following properties are fulfilled : * * therefore we find for small enough and for large enough .consequently the geometric requirements of the _ mountain pass _ theorem are fulfilled .now we move to prove the compactness requirements .we start verifying a local palais - smale condition for our functional .we say , r \ ,dr) ] . since , r \ , dr) ] , * strongly in , r \ , dr) ] .we write the convergence condition in , r \ , dr)\}^* ] then verifies a local palais - smale condition at the level .property i. is obvious .for small enough the lower radial bound of attains a maximum at a positive level of `` energy '' for .we denote as the smaller root of and as the location of the maximum .now we choose and .functional admits the following radial lower bound where and are the same constants as in lemma 3.2 .so this functional is bounded from below and positive for .thus property ii .is fulfilled .property iii . follows from the fact that all palais - smale sequences of minimizers of this functional are bounded since together with an application of proposition [ compactness ] .now we state the main result of this section : there exists a positive real number such that for dirichlet problem ( [ fullradial ] ) has at least two solutions .the functional is well defined in , r \ ,dr) ] large enough such that .we introduce the set of paths in the banach space , \mathring{w}^{2,2}([0,1 ] , r \ , dr ) \right ) \right| \ , \theta(0)=u^{(0 ) } , \ , \theta(1)=u^{(2 ) } \right\}.\ ] ] we introduce as well the value } j_\lambda[\theta(s)],\ ] ] and apply ekeland s variational principle to prove the existence of a palais - smale sequence at it .this means there exists a sequence , r \ , dr) ] .we must now prove that this palais - smale sequence is bounded .for , r \ , dr) ] palais - smale sequence for at level and denote to find r \ , dr - \frac{2}{3 } c_2 \ , \lambda \ , ||f||_{l^1(\mu ) } \ , ||u_n''||_{l^2(\mu ) } + \frac{1}{3 } \langle z_n , u_n \rangle \ge\ ] ] for a suitable positive constant , large enough and small enough . in consequencethe sequence is bounded in , r \ , dr) ] . in this sectionwe consider again problem ( [ fullradial ] ) on the unit interval but this time subjected to navier boundary conditions . 
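for orientation, the stationary problem treated in the dirichlet setting above and in the navier setting whose treatment begins here can be written, in one standard reading of this model (stated as our reconstruction rather than a verbatim quotation, with the forcing normalisation an assumption), as

\Delta^2 u \;=\; \det\!\left(D^2 u\right) + \lambda f \quad \text{in the unit disk} ,

whose reduction for radially symmetric u = \tilde u(r) is

\frac{1}{r}\left\{ r \left[ \frac{1}{r}\left( r\,\tilde u'\right)'\right]'\right\}' \;=\; \frac{1}{r}\,\tilde u'\,\tilde u'' + \lambda f(r), \qquad \tilde u'(0)=0,

with \tilde u(1)=\tilde u'(1)=0 in the dirichlet case and \tilde u(1)=\Delta\tilde u(1)=0 in the navier case .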
inthe radial setting these conditions translate to and , and we also assume the extremum condition at the origin for symmetry reasons .we again assume , r \ , dr) ] , which we define as the intersection , r \ , dr ) \cap \mathring{w}^{1,2}([0,1 ] , r \ , dr) ] but it is otherwise arbitrary . now we prove a result concerning the geometry of .first we note that both and are well defined in ,r \ ,dr) ] whose second derivative ( ) and first derivative normalized by the independent variable ( ) are square integrable on the unit interval against measure , as can be seen by means of a direct application of the sobolev inequalities .let ,r \ , dr) ] . since , r \ , dr) ] , * strongly in , r \ , dr) ] .we write the convergence condition in , r \ ,dr)\}^* ] as the sobolev inequalities immediately reveal . as in the previous section, we will prove the existence of two solutions to our boundary value problem by finding two critical points of functional , one of them is a negative local minimum and the other one is a positive mountain pass critical point .the proof of existence of the minimum is identical in both cases , so it will not be reproduced herein .so we concentrate in proving the existence of the positive mountain pass critical point .we employ the same minimax technique as in the previous section and the existence of a palais - smale sequence , r \ , dr) ] , where is the critical mountain pass level .we must now prove that this palais - smale sequence is bounded .for , r \ , dr) ] palais - smale sequence for at level and denote to find latexmath:[\[\frac{1}{6 } \int_0 ^ 1 \left ( u_n '' + \frac{u_n'}{r } \right)^2 r \ , dr - \frac{2}{3 } c_2 \ , \lambda \ , ||f||_{l^1(\mu ) } \ , for a suitable positive constant , large enough and small enough . in consequencethe sequence is bounded in , r \ , dr) ] .so far we have proven the existence of at least two solutions to both dirichlet and navier problems . in this sectionwe will clarify the nature of these solutions by means of numerically solving the boundary value problems employing a shooting method .our first step will be transforming differential equation into a form more suitable for the numerical treatment . to this end and from now on we will assume .integrating once equation against measure and using boundary condition yields ' = \frac{1}{2 } ( \tilde{u}')^2 + \frac{1}{2 } \lambda r^2.\ ] ] by changing variables we find the equation we have performed some numerical simulations with the final value problem for this ordinary differential equation using a fourth - order runge - kutta method .we have employed the final conditions and arbitrary , which correspond to dirichlet boundary conditions , to check how big could be in order to have solutions .we have solved this problem for ] and we have searched for solutions such that , which corresponds to the extremum condition for the original differential equation . using this shooting methodwe have found two different solutions which fulfill these requirements .one observes that for there are one trivial and one non - trivial solutions . for are two non - trivial solutions which approach each other for increasing . for more solutions were numerically found .the critical value of was numerically estimated to be .again , the smaller solution corresponds to a minimum of the `` energy '' functional and the larger solution corresponds to a mountain pass critical point . 
in all casesthe minimum solution is strictly smaller than the mountain pass solution for all .we have analyzed a differential equation appearing in the physical theory of epitaxial growth . we have started formally introducing the corresponding partial differential equation and then we have focused on radial solutions to its stationary counterpart .the resulting equation has been posed in the unit disk in the plane subjected to two different sets of boundary conditions .we have proven the existence of at least two solutions to both boundary value problems for small enough data . in each problemwe have observed both solutions numerically and identified one of them with the local minimum of our `` energy '' functional and the other one with a mountain pass critical point . due to the qualitatively similar results in both cases , the following assertions , and in particular the conjectures , refer to both boundary value problems .our numerical simulations have revealed that the solutions are ordered in the sense that the one corresponding to the minimum lies strictly below ( except for the boundary point ) the one corresponding to the mountain pass critical point .we have found the mountain pass solution is nontrivial for and the minimum solution is nontrivial for and trivial for .we have also proven nonexistence of solutions for large values of this parameter and we have found rigorous bounds for the size of the data separating existence from nonexistence , but the proofs will be reported elsewhere .we conjecture the solution corresponding to the minimum is dynamically stable : if we considered the full evolution problem we would find this solution is locally stable for it .we also conjecture the mountain pass solution is dynamically unstable .we have numerically observed both solutions become closer for approaching the critical value separating existence from nonexistence , so we conjecture that the transition from existence to nonexistence as we vary the parameter is a saddle - node bifurcation for the corresponding evolution problem .we finally conjecture there exists a unique solution , that is dynamically unstable , for the critical value of , precisely the one that corresponds to the bifurcation threshold . on the physical side ,our results can be interpreted within the theory of nonequilibrium potentials .the evolution problems correspond to gradient flows pursuing the minimization of our `` energy '' functionals , that play the role of nonequilibrium potentials .if both forcing term and initial condition are small the system will evolve towards the equilibrium state .if the forcing were stochastic the equilibrium state would become metastable . for a large forcing termthere are no equilibrium states , so the system will keep on evolving forever in a genuine nonequilibrium fashion . in the theory of nonequilibrium growth ,in which the forcing is normally assumed stochastic , it is known that these features affect both morphology and dynamics of the evolving interface . in the case of existence of a local minimumthis would imply in turn the existence of transient behavior , as found in different models of epitaxial growth .nonexistence of this state would mean that the asymptotic state is rapidly achieved .residence times could be estimated with the help of the theory of nonequilibrium potentials .our results constitute a first step towards the understanding of these phenomena , although more work is needed in order to get a full understanding of them .b. abdellaoui , a. 
dallaglio , and i. peral , _ regularity and nonuniqueness results for parabolic problems arising in some physical models , having natural growth in the gradient _ , j. math .pures appl .* 90 * ( 2008 ) 242269 .j. garca azorero and i. peral , _ multiplicity of solutions for elliptics problems with critical exponents or with a non - symmetric term _ , transactions of the american mathematical society * 323 * ( 1991 ) 877895 .
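as a purely illustrative companion to the numerical section above, the fragment below sets up one possible shooting computation for the radial problem with f identically equal to one under the dirichlet conditions. it relies on the once-integrated relation r (\Delta\tilde u)' = (\tilde u')^2/2 + \lambda r^2/2, which is our reconstruction; the paper's actual change of variables and code are not reproduced, and the cutoff near r = 0 is a choice of ours.

```python
import numpy as np

def shoot(beta, lam, eps=1e-4, n=5000):
    """integrate backwards from r = 1 to r = eps the first-order system
         u' = p ,   p' = q - p / r ,   q' = (p**2 + lam * r**2) / (2 * r) ,
    where q plays the role of the radial laplacian of u, starting from the
    dirichlet data u(1) = p(1) = 0 and a free value q(1) = beta.
    the returned value p(eps) is the shooting residual: a zero signals the
    extremum condition u'(0) = 0 of the original problem."""
    def f(r, y):
        u, p, q = y
        return np.array([p, q - p / r, (p ** 2 + lam * r ** 2) / (2.0 * r)])

    h = (1.0 - eps) / n
    r, y = 1.0, np.array([0.0, 0.0, beta])
    for _ in range(n):                       # classical rk4 taken with step -h
        k1 = f(r, y)
        k2 = f(r - h / 2, y - h / 2 * k1)
        k3 = f(r - h / 2, y - h / 2 * k2)
        k4 = f(r - h, y - h * k3)
        y = y - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r -= h
    return y[1]

lam = 10.0
betas = np.linspace(-30.0, 30.0, 61)
residuals = [shoot(b, lam) for b in betas]
# sign changes of the residual bracket candidate values of beta, i.e. candidate solutions
```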
we present the formal geometric derivation of a nonequilibrium growth model that takes the form of a parabolic partial differential equation . subsequently , we study its stationary radial solutions by means of variational techniques . our results depend on the size of a parameter that plays the role of the strength of forcing . for small forcing we prove the existence and multiplicity of solutions to the elliptic problem . we discuss our results in the context of nonequilibrium statistical mechanics .
activity of brain regions , car flow on roads , meta - population epidemic , all these arguably very different systems have in common that they can be represented as the activity of a quantity of interest on the nodes of a network .the coupling between the dynamics on the nodes and their network of interactions often leads to emergent collective states . in simple cases , such as the kuramoto and spin models, these macroscopic states can be classified according to the behaviour of an order parameter that measures the global coherence of the units comprising the system .unfortunately , this order parameter is blind to the structure of the underlying interaction network , and does not allow to investigate how the system behaves at different structural scales . to gain a better understanding of the effect of network structure on its activity , we need a method to characterise a macroscopic state that combines both aspects . having such a method will then help us to understand the functioning and mitigate disruption of complex systems or even engineer new ones . in this paper , we consider the problem in the spectral domain by exploiting tools from temporal graph signal analysis .in particular , we show that the collective patterns of a dynamical system can be robustly characterised by decomposing its nodal activity in an adequate basis associated to its _ structural properties_. to illustrate our method , we will study the spin model on complex networks .spin models are paradigmatic examples of systems where pairwise interactions give rise to emergent , macroscopic stationary states .historically , the behaviour of these models was studied on lattices , but other types of topologies have been considered in recent years and , in particular , researchers have investigated the effect of the topology on equilibrium and out of equilibrium states .remarkably , a new stationary state has been observed on complex networks in addition to the well - known non - magnetised and magnetised states : the supra - oscillating state , in which magnetisation coherently oscillates indefinitely .interestingly , these three states can be found on several network models , which suggests that the topology constrains the xy spins in a given phase .although it is possible to analytically connect the parameters of the network models to thermodynamics in some simple situations , the nonlinearity of the interactions in the model complicates the construction of a direct , general theory linking the underlying network topology and the phenomenology .it is therefore desirable to develop a framework that links the structure to the dynamics .we are in a situation where different topologies give rise to the same macroscopic states , and thus in a perfect setting to explore the interplay between structure and dynamics , using a network theoretical approach .as we are studying a global emergent property of a system , it is natural to seek a description that uses system - wide features of the underlying network . 
to explore the relationship between the structure of the network and the evolution of the individual spins , we leverage the spectral properties of networks and use the temporal graph signal transform ( tgst ) .temporal graph signal transform is an extension to time series of the graph signal transform , a method introduced in for static signals on complex networks .we use tgst to decompose the time series of the spins in the laplacian eigenbasis , which carries information about the structure of the network .graph signal transform in different forms has already been applied in the context of signal analysis such as fmri time series or image compression as well as graph characterization and community detection .the crucial feature of the tgst , which explains its power and versatility , is its ability to analyse data on irregular domains such as complex networks . by using tgst, we can quantify the importance of each eigenmode by computing the spatial power spectrum .we find that irrespective of the specific topology , the functional form of the power spectrum characterises a state .this clearly shows that a selection of modes is at play . in this paper, we will show that the dynamics resonates with specific graph substructures , leading to the same macroscopic state .this paper is structured as follows : we briefly introduce the spin model in sec .[ subsec : model ] along with the several macroscopic behaviours it displays on networks in sec .[ subsec : phenomena ] ; we then proceed by introducing the general framework of the graph signal transform in sec .[ subsec : gst ] and finally present and discuss our findings in sec .[ sec : results ] .we consider the spin model , a well known model in statistical mechanics , on various network topologies . 
in this model ,the dynamics of the spins is parametrised by an angle and its canonically associated momentum .each spin is then located on a network vertex and interacts with the spins in , the set of vertices connected to .the hamiltonian of the system reads : where ] by removing their temporal mean value .we therefore express the detrended time series using the decomposition in eq .[ eq : gst_x ] and compute the power spectrum , and considered its temporal average which we refer to as the spatial power spectrum .we can thus associate a power spectra to each macroscopic phase .we will now detail the laplacian spectra and power spectra for each macroscopic state shown in fig .[ fig : spectra_powa_across ] .we observe that the power spectra for each state are strikingly similar for the different topologies .this shows that the spatial power spectrum is driven by some specific substructures , that are must be common to all networks on which the dynamics is run .+ _ magnetised _ the laplacian spectrum for the -regular graph is highly degenerated , reflecting the regularity of the network , while the spectra for the ws and the lace networks are very similar , showing similar structure and non - degeneracy due the the random rewiring .the power spectra are unsurprisingly largely dominated by the first eigenvalue as it represents the constant component of the signal which dominates as the magnetisation is essentially constant .while the spectra for the lace and ws networks are very close , the spectra for the -regular network is very different , strengthening our point that specific substructure , potentially independent of the network model considered drive the macroscopic dynamics .+ _ non - magnetised _ this state is akin to a random state , as the spins only weakly interact and no long - range order is present .this is directly reflected by the small contributions from all eigenmodes , as there is barely an order of magnitude difference between the largest and smallest amplitudes .the contribution of the eigenvalues decay exponentially , and the spectra for the three network models are very similar , the lace and ws spectra are even quasi overlapping .+ _ supra - oscillating _ the power spectrum signature of the state , that only exists for the -regular and lace network is the most interesting . 
the fat tail and its seemingly power law decrease hints to a notion of hierarchy in the spatial modes that explain the non tameable oscillating patterns , as the magnetisation is eventually a result of the superposition of all the spatial modes .it is interesting to note that although the contribution of the eignenmodes decreases with the eigenvalues , they do so non - monotonically .we investigate this phenomena in the next section .the ws and lace networks have an element of randomness in their construction .it is therefore crucial to verify that the properties of the spectra and power spectra we observed in the previous section are not accidental , but genuinely representative of a class of networks .we generated realisations of the two types of networks in each state they can support , see fig .[ fig : spectra_powa_stat ] .the spectra of the laplacians are remarkably consistent , as shown by the small error bars .this small magnitude of the variability of the eigenvalues across realisations justifies the averaging of the power spectra across realisations .it is remarkable that this variance , affecting particular structures of the networks , does not have any effect on the power spectra , as they are all very consistent with low variance , except for some noise at the beginning of the power spectra .the emerging macroscopic properties are not affected by the local differences induced by the variance and the structural differences between the lace and ws , that make lace networks support the supra - oscillating state and not the ws , are robust to the noise that is introduced by different realisations of the same network model .earlier , we pointed out the non - monotonic decrease of the eigenmodes amplitude with the eigenvalues , on top of a clear overall decreasing trend . on the one hand , these fluctuations could be due to stochastic effects of particular network realisation which are ironed out when an ensemble average is taken . on the other hand , they could be genuine and due solely to the dynamics . to investigate the cause of these fluctuations, we averaged the power spectra for the different realisation of lace networks , and observe that both scenarios happen .in the case of the magnetised and supra - oscillating states , the curves becomes very smooth and decreases monotonically , and the non - magnetised power spectrum remains intrinsically noisy .this is not particularly surprising , as the first two states contain some degree of order and even a handful of realisations are enough to even out the fluctuations . on the contrary ,the behaviour of the spins in the non - magnetised state is essentially uncorrelated .this randomness is heightened by the randomness inherent to the generation of the lace networks , and the power spectrum strongly carries the mark of this structural randomness , contrary to the case of the two other states , where the temporal structure , induced by underlying network structure , is enough to cancel the variations in the structure . 
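before turning to the finite-size checks below, a minimal sketch of the pipeline that produces these spatial power spectra: build the graph laplacian, remove each node's temporal mean, project the node signals on the laplacian eigenvectors and average the squared coefficients over time. the combinatorial laplacian is assumed here (a normalised variant may have been used), and the graph, the signals and all names are placeholders, not the networks or spin data of the paper.

```python
import numpy as np

def spatial_power_spectrum(A, X):
    """A: (N, N) adjacency matrix of an undirected graph.
       X: (N, T) array, one time series per node (e.g. spin angles theta_i(t)).
       returns the laplacian eigenvalues and the time-averaged squared projection
       of the detrended signals on each eigenvector."""
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial laplacian
    evals, evecs = np.linalg.eigh(L)               # eigenvectors as columns
    Xd = X - X.mean(axis=1, keepdims=True)         # remove each node's temporal mean
    coeffs = evecs.T @ Xd                          # (N, T) spectral coefficients
    power = (coeffs ** 2).mean(axis=1)             # temporal average of squared coefficients
    return evals, power

# hypothetical usage on a random graph and placeholder node time series
rng = np.random.default_rng(1)
N, T = 100, 2000
A = (rng.random((N, N)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetrise, no self-loops
theta = rng.standard_normal((N, T)).cumsum(axis=1)
lam, power = spatial_power_spectrum(A, theta)
```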
finally , in fig .[ fig : powa_spectra_size_lace ] , we present evidence that the power spectral signatures for the lace networks are not due to finite size effects .the shape of the power spectra and the relative importance of the eigenmodes are consistent for networks of sizes .in this paper , we presented the temporal graph signal transform , a method to decompose time dependent signals living on the nodes of a network , using a basis that incorporate structural information .we applied tgst to the time series of the spins of the spin model in its three possible macroscopic states on three different network topologies .we found clear spatial power spectral signatures that characterise each state .importantly , these signatures are robust across topologies and to structural variability in different realisations of the watts - strogatz and lace networks . in all cases ,the power spectra are dominated by small eigenvalues , that correspond to larger structures .the shape of the power spectra and their decrease reflect the behaviour of the macroscopic magnetisation of the three states in fig .[ fig : macro - states ] : the only significant contribution of the magnetised state is the constant eigenvector ; the non - magnetised state is also dominated the constant eigenvector , but there are non negligible contribution from higher modes , whose power decays exponentially .this is consistent with the notion that in the non - magnetised state , the spins oscillate in a random fashion .finally , the power spectrum of the supra - oscillating state displays a power law like decay , hinting that a hierarchy of modes exists and elucidating the origin of this state .these results offer a new avenue to characterise not only macroscopic states in statistical mechanics models but also the behaviour of real world system .this technique is powerful enough to circumvent traditional problems such as the need to use finite size scaling to take into account finite size effects .this study constitutes the first step to quantify and identify key network features that support collective states .we are now investigating the characterisation of the structures of the eigenvectors to clearly pinpoints the key mesoscopic structures that supports the dynamics , effectively constituting a centrality measure for network structures. a parallel line of investigation is the combination of spatial and temporal frequencies to define dispersion relations for networks , potentially giving a simple criterion to classify networks .p.e . acknowledges financial support from a pet methodology programme grant from mrcuk ( ref no .g1100809/1 ) .p.e . and t.t .acknowledge financial support for reciprocal visits from a daiwa small grant from the daiwa foundation .this work was partly supported by bilateral joint research project between jsps , japan , and f.r.s.fnrs , belgium .27ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) link:\doibase 10.1109/msp.2012.2235192 [ * * , ( ) ] in _ _ ( , ) pp . in _ _( , ) pp . * * , ( ) * * , ( ) ( ) * * , ( ) link:\doibase 10.1007/978 - 3 - 642 - 69689 - 3 [ _ _ ] , , vol .( , , ) p. 
doi:10.1016/j.physrep.2015.10.008 ; doi:10.1103/physrevlett.80.2109 ; doi:10.1103/physreve.61.5080 ; doi:10.1103/physrevlett.89.054101 ; doi:10.1103/physreve.75.027104 ; doi:10.1103/physreve.77.031102 ; doi:10.1103/physreve.82.066202 ; doi:10.1016/j.physd.2013.04.006 ; doi:10.1140/epjb/e2009-00078-6
there is recent evidence that the xy spin model on complex networks can display three different macroscopic states in response to the topology of the network underpinning the interactions of the spins . in this work , we present a novel way to characterise the macroscopic states of the xy spin model based on the spectral decomposition of time series using topological information about the underlying networks . we use three different classes of networks to generate time series of the spins for the three possible macroscopic states . we then use the temporal graph signal transform technique to decompose the time series of the spins on the eigenbasis of the laplacian . from this decomposition , we produce spatial power spectra , which summarise the activation of structural modes by the non - linear dynamics , and thus coherent patterns of activity of the spins . these signatures of the macroscopic states are independent of the underlying networks and can thus be used as universal signatures for the macroscopic states . this work opens new avenues to analyse and characterise dynamics on complex networks using temporal graph signal analysis .
an experiment was conducted to study the bouncing behavior of silicone oil drops on a vertically oscillating shallow tray of the same liquid .the simple bouncing , period - doubled bouncing and walking regimes as well as chaotic and intermittent behaviors were observed and occurred under conditions very similar to those of previous work .the influence of depth on the trajectories of drops walking over a submerged obstacle was investigated . the local change of depth associated with the obstacle led to a range of possible trajectories , including straight crossing , reflection from the aft face of the obstacle and trapping of the droplet above the obstacle .a reduction in walking velocity occurred in all cases .the different behaviors were dependent on the approach velocity of the drops ; the change of depth influenced the decay time of the standing waves generated during bouncing , thus producing changes to the walking speed and direction .a circular aluminum tray of 6 inches in diameter was filled with silicone oil ( _ clearco _psf-50cst ) and mounted on a vibration exciter .the depth of the oil film was greater than 4 mm so that the faraday waves were independent of this parameter .an accelerometer was installed on the side of the tray to measure the amplitude and frequency of motion .the influence of a local change in depth was studied by gluing a submerged obstacle , namely an aluminum strip with height mm , width mm and length 120 mm , to the bottom of the tray along its diameter . in order to ensure that the drops would walk toward the obstacle ,two guides were placed forming a triangular shape with the obstacle .the height of these guides exceeded the maximum depth of the oil so that drops would reflect off them if they got close .the tray was filled with silicone oil until the surface was 2 mm above the obstacle , mm to create drops of the desired size , a needle or a sharp toothpick ( mm in diameter ) was plunged inside the oil tray and rapidly pulled out . when the resultant liquid ligament detached from the surface , it either produced one small droplet or a set of droplets with distributed sizes via the plateau - rayleigh instability , depending on the rate at which the liquid bridge was elongated .this technique produced small drops ( generally mm ) .high and low resolution versions , respectively , of the video entries to the 2013 gallery of fluid motion are provided in ancillary files + + _ v102356_influencelocalchangedepth_bouncingdrop _ and + _v102356_influencelocalchangedepth_bouncingdrop_small_. + in these movies , the following phenomena are shown : * simple bouncing ( bath of uniform depth ) * double bouncing ( bath of uniform depth ) * walking ( bath of uniform depth ) * walking ( local change in bath depth ) , with the following specific behaviors 1 . crossing 2 .rebound 3 . trapping
the work of couder _ et al _ ( see also bush _ et al _ ) inspired consideration of the impact of a submerged obstacle , providing a local change of depth , on the behavior of oil drops in the bouncing regime . in the linked videos , we recreate some of their results for a drop bouncing on a uniform depth bath of the same liquid undergoing vertical oscillations just below the conditions for faraday instability , and show a range of new behaviors associated with change of depth . this article accompanies a fluid dynamics video entered into the gallery of fluid motion of the 66th annual meeting of the aps division of fluid dynamics .
biological evolution presents many problems concerning highly nonlinear , nonequilibrium systems of interacting entities that are well suited for study by methods from statistical mechanics .an excellent review of this rapidly growing field is found in ref . .among these problems are those concerned with _ coevolution _ : the simultaneous evolution of many species , whose mutual interactions produce a constantly changing _ fitness landscape_. throughout this process , new species are created and old species go extinct . a question that is still debated in evolutionary biology is whether evolution on the macroscale proceeds gradually or in ` fits and starts , ' as suggested by eldredge and gould. in the latter mode , known as _ punctuated equilibria _ , species and communities appear to remain largely unchanged for long periods of time , interrupted by brief ( on a geological timescale ) periods of mass extinctions and rapid change .a coevolution process involves a large range of timescales , from the ecologically relevant scales of a few generations , to geological scales of millions or billions of generations .traditionally , models of macroevolution have been constructed on a highly coarse - grained timescale .( the best - known such model to physicists is probably the bak - sneppen model. ) however , the long - time dynamics of the evolution is clearly driven by ecological processes , mutations , and selection at comparatively short timescales . as a result , in recent years several new models have been proposed that are designed to span the disparate ecological and evolutionary timescales .these models include the webworld model, the tangled - nature model, and simplified versions of the latter. in this paper i discuss and compare some of the properties of fluctuations in two simplified coevolution models : the model introduced in ref . and a different model that i am currently developing. the rest of this paper is organized as follows . in sec . [sec : mod ] i introduce the two models , in sec .[ sec : res ] i compare and discuss numerical results from large - scale monte carlo simulations for the different models , and in sec . [ sec : conc ] i present a summary and conclusions .both of the models studied here consider haploid species whose genome is represented by a bit string of length , so that there are a total of potential species , labeled by an index ] .if and , species is a predator and its prey , or _ vice versa _ , while and both positive denotes mutualism or symbiosis , and both negative denote competition .once is chosen at the beginning of a simulation , it remains fixed ( quenched randomness " ) .these interactions ( together with other , model - specific parameters ) determine whether or not a species will have a sufficient reproduction probability to be successful _ in a particular community _ of other species .typically , only a small subset of species have nonzero populations at any one time , forming a community .* model a : * + in model a , which was introduced and studied in refs ., takes the form where is the total population at . in this model , the verhulst factor prevents the population from indefinite growth and can be seen as representing an environmental carrying capacity." the interaction matrix has elements that are randomly distributed over ] at the beginning of the simulation and kept fixed thereafter . here , model b will be simulated with the following parameter values : ( see below ) , , and .only 5% of the potential species are producers ( i.e. 
, have ) , and in order to obtain food webs with more realistic connectivity , 90% of the pairs are randomly chosen to be zero , giving a _ connectance_ of 10% , while the elements of the nonzero pairs are chosen on $ ] as described above .both models are sufficiently simple that the populations and stabilities of fixed - point communities can be obtained analytically for zero mutation rate .these analytical results are extensively compared with simulations in refs . , and .for the purposes of the present study i just mention that the analytical stability results yield intervals for the fecundity , inside which it is ensured that a fixed - point community in the absence of mutations is stable toward small deviations in all directions .the values of used here are chosen to yield such stable fixed - point communities for both models . [ cols="^ " , ]in this paper i have discussed the fluctuations in two different , simplified models of biological macroevolution that both are based on the birth / death behavior of single individuals on the ecological timescale . on evolutionary timescalesboth models give rise to power - law distributions of characteristic waiting times and power spectra that exhibit like flicker noise , in agreement with some interpretations of the fossil record. the main difference in the construction of the models is that in model a the population size is controlled by a verhulst factor , while in model b the population is maintained by a small percentage of the species that have the ability to directly utilize an external resource ( producers or autotrophs ) .all other species must maintain themselves as predators ( consumers or heterotrophs ) .in both models the probability density of the lifetimes of individual species follows a 1/time power law and , consistently , the power spectra of the time series of diversity and extinction sizes show noise .however , the probability density of quiet periods , defined as the times between events when the magnitude of the logarithmic derivative of the diversity exceeds a given cutoff , behaves differently in the two models . in model b, the quiet - period distribution goes as 1/time , consistent with the universality class of the zero - dimensional bak - sneppen model .in contrast , in model a the behavior is proportional to 1/time , like the lifetimes of individual species .the difference between the behaviors can be linked to the lower degree of synchronization between extinction events in model b. it remains a topic for further research to see whether this difference extends to more realistic modifications of model b. i thank r. k. p. zia for a pleasant and fruitful collaboration in formulating and studying model a , v. sevim for the data on model a that appear in fig .[ fig : dur](b ) , and j. w. lee for useful discussions .this work was supported in part by the u.s . national science foundation through grantsdmr-0240078 and dmr-0444051 , and by florida state university through the school of computational science , the center for materials research and technology , and the national high magnetic field laboratory .b. drossel , a. mckane , and c. quince , `` the impact of non - linear functional responses on the long - term evolution of food web structure , '' _ j. theor_ * 229 * , pp . 539548 , 2004 . m. hall , k. christensen , s. a. di collobiano , and h. j. jensen , `` time - dependent extinction rate and species abundance in a tangled - nature model of biological evolution , '' _ phys .* 66 * , art .011904 , 2002 .p. a. 
rikvold and r. k. p. zia , `` punctuated equilibria and noise in a biological coevolution model with individual - based dynamics , '' _ phys .e _ * 68 * , art .031913 , 2003 .r. k. p. zia and p. a. rikvold , `` fluctuations and correlations in an individual - based model of biological coevolution , '' _ j. phys .a _ * 37 * , pp .51355155 , 2004 .v. sevim and p. a. rikvold , `` a biological coevolution model with correlated individual - based dynamics , '' in _computer simulation studies in condensed matter physics xvii _, d. p. landau , s. p. lewis , and h .- b .schttler , eds ., springer - verlag , berlin , in press .e - print arxiv : q - bio.pe/0403042 .
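as a rough illustration of the kind of individual-based dynamics described in the model section above, the sketch below implements one plausible reading of model a: a birth/death update in which the reproduction probability of species i depends on its interactions with the current community and on a verhulst factor N_tot/N_0 through a logistic function. the logistic form, the simplified mutation scheme and every parameter value are assumptions of ours and may differ from the rules actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

L = 10                         # genome length, so 2**L potential species (assumed value)
N0 = 2000                      # verhulst "carrying capacity" (assumed value)
F, pdeath, mu = 4, 0.05, 1e-3  # fecundity, death probability, per-genome mutation scale
J = rng.uniform(-1.0, 1.0, (2 ** L, 2 ** L))
np.fill_diagonal(J, 0.0)       # quenched random interaction matrix, no self-interaction

n = np.zeros(2 ** L, dtype=int)
n[rng.integers(2 ** L)] = 100              # start from a single random species

for t in range(1000):
    Ntot = n.sum()
    if Ntot == 0:
        break
    delta = (J @ n) / Ntot - Ntot / N0      # community interactions minus verhulst term
    p_repro = 1.0 / (1.0 + np.exp(-delta))  # assumed logistic reproduction probability
    births = np.zeros_like(n)
    for i in np.flatnonzero(n):
        parents = rng.binomial(n[i], p_repro[i])
        offspring = parents * F
        # crude mutation step: a mutated offspring gets one random bit flipped
        mutated = rng.binomial(offspring, 1.0 - (1.0 - mu) ** L)
        births[i] += offspring - mutated
        for _ in range(mutated):
            births[i ^ (1 << rng.integers(L))] += 1
    deaths = rng.binomial(n, pdeath)
    n = n + births - deaths

diversity = np.count_nonzero(n)             # number of species with nonzero population
```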
fluctuations in diversity and extinction sizes are discussed and compared for two different , individual - based models of biological coevolution . both models display power - law distributions for various quantities of evolutionary interest , such as the lifetimes of individual species , the quiet periods between evolutionary upheavals larger than a given cutoff , and the sizes of extinction events . time series of the diversity and measures of the size of extinctions give rise to flicker noise . surprisingly , the power - law behaviors of the probability densities of quiet periods in the two models differ , while the distributions of the lifetimes of individual species are the same .
the method of likelihood introduced by fisher is certainly one of the most commonly used techniques for parametric models . the likelihood has also been shown to be very useful in the non-parametric context . more concretely, owen ( 1988 , 1990 , 1991 ) introduced the empirical likelihood ratio statistics for non-parametric problems . two sample problems are frequently encountered in many areas of statistics , and are generally treated under the assumption of normality . the most commonly used test in this connection is the two-sample $t$-test for the equality of means , performed under the assumption of equality of variances . if the variances are unknown , we have the so-called behrens-fisher problem . it is well-known that the two-sample $t$-test has one major drawback ; it is highly sensitive to deviations from the ideal conditions , and may perform miserably under model misspecification and the presence of outliers . recently basu et al . ( 2014 ) presented a new family of test statistics to overcome the problem of non-robustness of the $t$-statistic . empirical likelihood methods for two-sample problems have been studied by different researchers since owen ( 1988 ) introduced the empirical likelihood as a non-parametric likelihood-based alternative approach to inference on the mean of a single population . the monograph of owen ( 2001 ) is an excellent overview of developments on empirical likelihood and considers a multi-sample empirical likelihood theorem , which includes the two-sample problem as a special case . some important contributions for the two-sample problem are given in owen ( 1991 ) , adimari ( 1995 ) , jing ( 1995 ) , qin ( 1994 , 1998 ) , qin and zhao ( 2000 ) , zhang ( 2000 ) , liu et al . ( 2008 ) , baklizi and kibria ( 2009 ) , wu and yan ( 2012 ) and references therein . consider two independent unidimensional random variables with unknown mean and variance and with unknown mean and variance . let be a random sample of size from the population denoted by , with distribution function , and be a random sample of size from the population denoted by , with distribution function . we shall assume that and are unknown ; therefore we are interested in a non-parametric approach , more concretely we shall use empirical likelihood methods . if we denote and , our interest will be in testing , being a known real number . since becomes the parameter of interest , apart from testing ( [ 1 ] ) , we might also be interested in constructing the confidence interval for . in this paper we are going to introduce a new family of empirical test statistics for the two-sample problem introduced in ( [ 1 ] ) : empirical phi-divergence test statistics . this family of test statistics is based on phi-divergence measures and it contains the empirical log-likelihood ratio test statistic as a particular case . in this sense , we can think of the family of empirical phi-divergence test statistics presented and studied in this paper as a generalization of the empirical log-likelihood ratio statistic . let , assume that and , a realization of , . we denote and with , and , . the empirical log-likelihood ratio statistic for testing ( [ 1 ] ) is given by using the standard lagrange multiplier method we might obtain , as well as .
for ,taking derivatives on we obtain and therefore , the empirical maximum likelihood estimates , and of , and , under , are obtained as the solution of the equations{l}\dfrac{1}{m}{\textstyle\sum\limits_{i=1}^{m } } \frac{1}{1+\lambda_{1}\left ( x_{i}-\mu\right ) } = 1\\ \dfrac{1}{n}{\textstyle\sum\limits_{j=1}^{n } } \frac{1}{1+\lambda_{2}\left ( y_{j}-\mu-\delta_{0}\right ) } = 1\\ m\lambda_{1}+n\lambda_{2}=0 \end{array } \right . , \label{3bis}\ ] ] and in relation , taking derivatives on we have and therefore , the empirical log - likelihood ratio statistic ( [ c ] ) , for testing ( [ 1 ] ) , can be written as under some regularity conditions , jing ( 1995 ) established that where is the -th percentile of the distribution . our interest in this paper is to study the problem of testing given in ( [ 1 ] ) and at the same time to construct confidence intervals for on the basis of the empirical phi - divergence test statistics .empirical phi - divergence test statistics in the context of the empirical likelihood have studied by baggerly ( 1998 ) , broniatowski and keizou ( 2012 ) , balakhrishnan et al .( 2013 ) , felipe et al .( 2015 ) and references therein .the family of empirical phi - divergence test statistics , considered in this paper , contains the classical empirical log - likelihood ratio statistic as a particular case . in section [ sec2 ] ,the empirical phi - divergence test statistics are introduced and the corresponding asymptotic distributions are obtained .a simulation study is carried out in section [ sec4 ] .section [ sec3 ] is devoted to develop a numerical example . in section [ sec5 ] the previous results , devoted to univariate populations , are extended to -dimensional populations .for the hypothesis testing considered in ( [ 1 ] ) , in this section the family of empirical phi - divergence test statistics are introduced as a natural extension of the empirical log - likelihood ratio statistic given in ( [ c ] ) .we consider the -dimensional probability vectors and where , were defined in ( [ 2 ] ) and ( [ 3 ] ) , respectively , and in ( [ ass ] ) .let be the -dimensional vector obtained from with , replaced by the corresponding empirical maximum likelihood estimators , and by .the kullback - leibler divergence between the probability vectors and is given by where therefore , the relationship between and is based on ( [ 11 ] ) , in this paper the empirical phi - divergence test statistics for ( [ 1 ] ) are introduced for the first time .this family of empirical phi - divergence test statistics is obtained replacing the kullback - leibler divergence by a phi - divergence measure in ( [ 11 ] ) , i.e. , where with being any convex function such that at , , and at , and . for more detailssee cressie and pardo ( 2002 ) and pardo ( 2006 ) .therefore , ( [ 11bis ] ) can be rewritten as if is chosen in , we get the kullback - leibler divergence and coincides with the empirical log - likelihood ratio statistic given in ( [ 11 ] ) .let be the optimal estimator of under the assumption of having the known values of , , i.e. it is given by the shape and has minimum variance .it is well - known that similarly , an asymptotically optimal estimator of having unknown values of , , is given by where , are consistent estimators of , respectively .in the following lemma an important relationship is established , useful to get the asymptotic distribution of .[ lem]let the empirical likelihood estimator of . then, we have see appendix [ ap1 ] .[ th1]suppose that , and ( [ ass ] ). 
then, see appendix [ ap2 ] .[ r3]a -level confidence interval on can be constructed as the lower and upper bounds of the interval require a bisection search algorithm .this is a computationally challenging task , because for every selected grid point on , one needs to maximize the empirical phi - divergence over the nuisance parameter , , and there is no closed - form solution to the maximum point for any given .the computational difficulties under the standard two - sample empirical likelihood formulation are due to the fact that the involved lagrange multipliers , which are determined through the set of equations ( [ 3bis ] ) , have to be computed based on two separate samples with an added nuisance parameter .such difficulties can be avoided through an alternative formulation of the empirical likelihood function , for which computation procedures are virtually identical to those for one - sample of size empirical likelihood problems . through the transformations ( [ p ] ) and ( [ q ] )can be alternatively obtained as where the estimates of the lagrange multipliers are the solution in of [ r4]in the particular case that , the two samples might be understood as a random sample of size from a unique bidimensional population . in thissetting the two sample problem can be considered to be a particular case of balakrishnan et al .( 2015 ) .[ r5]fu et al . (2009 ) , yan ( 2010 ) and wu and yan ( 2012 ) pointed out that empirical log - likelihood ratio statistic , , given in ( [ 8 ] ) for testing ( [ 1 ] ) , does not perform well when the distribution associated to the samples are quite skewed or samples sizes are not large or sample sizes from each population are quite different . to overcome this problem fu et al .( 2009 ) considered the weighted empirical log - likelihood function defined by with , and obtained the weighted empirical likelihood ( wel ) estimator as well as the weighted empirical log - likelihood ratio statistic . in order to get the wel estimator ,it is necessary to maximize ( [ r1 ] ) subject to they obtained that the wel estimates of and are given by where and are the same transformations given in remark [ r3 ] with and the estimates of the lagrange multipliers are the solution in of now , if we define the probability vectors the weighted empirical log - likelihood ratio test presented in wu and yan ( 2012 ) can be written as the weighted empirical log - likelihood ratio test can be extended by defining the family of weighted empirical phi - divergence test statistics as where is the phi - divergence measure between the probability vectors and , i.e., taking into account where and based on theorem 2.2 . in wu and yan ( 2012 ) , we have that where is the second diagonal element of the matrix .the square of the classical -test statistic for two sample problems, has asymptotically distribution , the same as the empirical phi - divergence test statistics , according to theorem [ th1 ] . 
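remark [ r3 ] reduces the computation of the interval to repeated one-sample empirical-likelihood evaluations : for every candidate value of the parameter one solves a one-dimensional equation for the lagrange multiplier and then locates the points where the statistic crosses the chi-square quantile . the following python sketch shows this pattern for the classical one-sample empirical log-likelihood ratio of owen ( 1988 ) and a bisection-based confidence interval for a mean ; the function names , the use of scipy and the simple bracketing of the multiplier are illustrative assumptions and not part of the paper , and the two-sample statistics above additionally require profiling over the nuisance mean as described in remark [ r3 ] .

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(z, mu0):
    """One-sample empirical log-likelihood ratio -2 log R(mu0) of Owen (1988)."""
    d = z - mu0
    if d.max() <= 0 or d.min() >= 0:        # mu0 outside the convex hull of the data
        return np.inf
    # the Lagrange multiplier lam solves sum d_i / (1 + lam d_i) = 0 on the feasible interval
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

def el_confidence_interval(z, alpha=0.05):
    """Invert the EL ratio test by bisection to obtain a (1-alpha) CI for the mean."""
    crit = chi2.ppf(1.0 - alpha, df=1)
    zbar = z.mean()
    f = lambda mu0: el_log_ratio(z, mu0) - crit
    eps = 1e-8 * (z.max() - z.min())
    lower = brentq(f, z.min() + eps, zbar)  # statistic is 0 at zbar, diverges at the hull boundary
    upper = brentq(f, zbar, z.max() - eps)
    return lower, upper

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=2.0, size=50)
print(el_confidence_interval(sample))
```

the outer bisection is exactly the step that has to be repeated over a grid of candidate parameter values for the two-sample intervals , which explains the computational cost discussed in remark [ r3 ] .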
in order to compare the finite sample performance of the confidence interval ( ci ) of based on with respect to the ones based on as well as the empirical log-likelihood ratio test-statistic given in ( [ c ] ) , we count on a subfamily of phi-divergence measures , the so-called power divergence measures , depending on a tuning parameter , i.e.
\[
\begin{cases}
\dfrac{2}{\gamma(\gamma+1)}\left(\sum_{i=1}^{m}(m\widetilde{p}_{i})^{-\gamma}+\sum_{j=1}^{n}(n\widetilde{q}_{j})^{-\gamma}-n\right), & \gamma\in\mathbb{R}\setminus\{0,-1\},\\[2mm]
-2\left(m\log m+n\log n+m\sum_{i=1}^{m}\log\widetilde{p}_{i}+n\sum_{j=1}^{n}\log\widetilde{q}_{j}\right), & \gamma=0,\\[2mm]
2\left(m\log m+n\log n+m\sum_{i=1}^{m}\widetilde{p}_{i}\log\widetilde{p}_{i}+n\sum_{j=1}^{n}\widetilde{q}_{j}\log\widetilde{q}_{j}\right), & \gamma=-1,
\end{cases}
\]
where and can be obtained from ( [ p2])-([q2 ] ) . we analyzed five new test-statistics , the empirical power-divergence test statistics taking . the case of is not new , since the empirical log-likelihood ratio test-statistic is a member of the empirical power-divergence test statistics , i.e. . the ci of based on with confidence level is essentially the ci of the $z$-test statistic , . for the remaining values of the tuning parameter , as mentioned in remark [ r3 ] , since there is no explicit expression , the bisection method should be followed . the simulated coverage probabilities of the ci of based on were obtained with replications by , with being the indicator function . the simulated expected width of the ci of based on was obtained with replications by ( a small monte carlo sketch of this estimation scheme is given after the data example below ) . the reason why two different values of were used is twofold . on one hand , calculating is much more time consuming than , and on the other hand , for the designed simulation experiment the number of replications needed to obtain a good precision is smaller for the expected width than for the coverage probability . the simulation experiment is designed in a similar manner as in wu and yan ( 2012 ) . the true distributions , unknown in practice , are generated from : i ) : , , with , , ; ii ) : , , with , , , . notice that in case ii ) since =e[y] .
[table: simulated coverage probabilities and expected widths of the cis , case i ) normal populations.]
[table: simulated coverage probabilities and expected widths of the cis , case ii ) lognormal populations.]
one of the assumptions of yu et al . ( 2002 ) is that the field measurement and the lab measurement have a common mean . the two types of measurements differ , however , in terms of precision . yu et al . ( 2002 ) also assumed that , was bivariate normal , which would not be required under our proposed empirical likelihood approach . in tsao and wu ( 2006 ) this example was studied on the basis of the empirical log-likelihood ratio test . the cis of based on are summarized in table [ table3 ] . as in the simulation study , the narrowest ci width is obtained with .
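as promised above , here is a minimal monte carlo sketch of the coverage-probability and expected-width estimators used in the simulation study , written for case i ) with the classical two-sample z-interval standing in for the empirical phi-divergence intervals ( whose computation follows remark [ r3 ] ) ; the parameter values , sample sizes and number of replications are illustrative assumptions and not those of the tables .

```python
import numpy as np
from scipy.stats import norm

def z_interval(x, y, alpha=0.05):
    """Classical large-sample interval for delta = E[Y] - E[X] (stand-in for the EL-based CIs)."""
    m, n = len(x), len(y)
    delta_hat = y.mean() - x.mean()
    se = np.sqrt(x.var(ddof=1) / m + y.var(ddof=1) / n)
    z = norm.ppf(1.0 - alpha / 2.0)
    return delta_hat - z * se, delta_hat + z * se

def simulate_coverage(m=30, n=40, mu1=0.0, mu2=0.5, s1=1.0, s2=2.0,
                      alpha=0.05, n_rep=5000, seed=0):
    rng = np.random.default_rng(seed)
    delta_true = mu2 - mu1
    hits, widths = 0, 0.0
    for _ in range(n_rep):
        x = rng.normal(mu1, s1, size=m)
        y = rng.normal(mu2, s2, size=n)
        lo, hi = z_interval(x, y, alpha)
        hits += (lo <= delta_true <= hi)   # indicator appearing in the coverage estimator
        widths += hi - lo                  # accumulates interval widths
    return hits / n_rep, widths / n_rep    # simulated coverage and expected width

print(simulate_coverage())
```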
in all the test-statistics used to construct the cis , is not contained , so the null hypothesis of equal means is rejected with significance level .
[table 3: confidence intervals ( lower bound , upper bound , width ) from the data example : ( .122 , .703 , .581 ) , ( .121 , .712 , .591 ) , ( .121 , .718 , .598 ) , ( .123 , .724 , .602 ) , ( .124 , .726 , .601 ) , ( .133 , .725 , .592 ) , ( .101 , .712 , .611 ).]
let and be two mutually independent random samples with common distribution functions and respectively . assuming that and take values in and
\[
e\left[\boldsymbol{x}_{i}\right]=\boldsymbol{\mu}_{1},\quad cov\left[\boldsymbol{x}_{i}\right]=\boldsymbol{\sigma}_{1},\ i=1,\dots,m,\qquad
e\left[\boldsymbol{y}_{j}\right]=\boldsymbol{\mu}_{2},\quad cov\left[\boldsymbol{y}_{j}\right]=\boldsymbol{\sigma}_{2},\ j=1,\dots,n,
\]
with and , our interest is in testing where and are known . the empirical likelihood under is and in the whole parameter space , with , and , . the empirical log-likelihood ratio statistic for testing ( [ 5.1 ] ) is given by based on lagrange multiplier methods , is obtained for where . the empirical maximum likelihood estimates , and of , and , under , can be obtained as the solution of
\[
\left\{\begin{array}{l}
\dfrac{1}{m}\sum_{i=1}^{m}\dfrac{\boldsymbol{x}_{i}-\boldsymbol{\mu}}{1+\boldsymbol{\lambda}_{1}^{t}(\boldsymbol{x}_{i}-\boldsymbol{\mu})}=\boldsymbol{0}_{k}\\[3mm]
\dfrac{1}{n}\sum_{j=1}^{n}\dfrac{\boldsymbol{y}_{j}-\boldsymbol{\mu}-\boldsymbol{\delta}}{1+\boldsymbol{\lambda}_{2}^{t}\left(\boldsymbol{y}_{j}-\boldsymbol{\mu}-\boldsymbol{\delta}\right)}=\boldsymbol{0}_{k}\\[3mm]
m\boldsymbol{\lambda}_{1}^{t}+n\boldsymbol{\lambda}_{2}^{t}=\boldsymbol{0}_{k}
\end{array}\right.
\]
on the other hand is obtained for after some algebra , we obtain under some regularity conditions , it follows that where is the -th order quantile of the distribution . let be the estimate of the probability vector where and are obtained from ( [ 5.2 ] ) and ( [ 5.3 ] ) replacing , and by , and , respectively . in this -dimensional case , the kullback-leibler divergence between the probability vectors and is given by therefore , the relationship between and the kullback-leibler divergence is based on ( [ 5.6 ] ) the family of empirical phi-divergence test statistics are defined as with therefore the expression of is a result similar to the one given in lemma 1 for the -dimensional case is where and . finally , based on this result it is possible to establish . rnyi ( 1961 ) introduced the rnyi divergence measure as an extension of the kullback-leibler divergence . unfortunately this divergence measure is not a member of the family of phi-divergence measures considered in this paper . menndez et al . ( 1995 , 1997 ) introduced and studied the ( h , phi)-divergence measures in order to have a family of divergence measures in which the phi-divergence measures as well as the rnyi divergence measure are included . not only the rnyi divergence measure is included in this new family ; other important divergence measures not included in the family of phi-divergence measures are included as well . for more details about the different divergence measures included in the ( h , phi)-divergences see for instance pardo ( 2006 ) .
based on the ( h , phi)-divergence measures between the probability vectors and , defined in ( [ 9 ] ) and ( [ 10 ] ) respectively , we can consider the following family of empirical ( h , phi)-divergence test statistics for the two - sample problem considered in ( [ 1 ] ) where is a differentiable increasing function from onto with and .if we consider in ( [ a ] ) , and we get i.e. , the empirical rnyi s divergence test statistics for testing ( [ 1 ] ) . for and , we get and it is clear that therefore in the same way can be established for the problem considered in ( [ 5.1 ] ) that where with defined in ( [ 5.7 ] ). * acknowledgement .* this research is partially supported by grants mtm2012 - 33740 from ministerio de economia y competitividad ( spain ) .99 adimari , g. ( 1995 ) .empirical likelihood confidence intervals for the difference between means _ statistica _ , * 55 * , 8794 baggerly , k. a. ( 1998 ) .empirical likelihood as a goodness - of - fit measure ._ biometrika _ , * 85 * , 535547 .baklizi , a. and kibria , b.m .g. ( 2009 ) .one and two sample confidence intervals for estimating the mean of skewed populations : an empirical comparative study ._ journal of applied statistics , _ * 36 , * 6 , 601 - 609 .balakrishnan , n , martin , n. and pardo , l. ( 2015 ) .empirical phi - divergence test statistics for testing simple and composite null hypotheses ._ statistics _ , * 49 * , 951977 .basu , a. , mandal , a. martin , n. and pardo , l. ( 2015 ) .robust tests for the equality of two normal means based on the density power divergence ._ metrika _ , * 78 * , 611634 .bhattacharyya , a. ( 1943 ) . on a measure of divergence between two statistical populations defined by their probability distributions ._ bulletin of the calcutta mathematical society _, 35 , 99109 .broniatowski , m. and keziou , a. ( 2012 ) .divergences and duality for estimating and test under moment condition models . _ journal of statistical planning and inference , _ * 142 * , 25542573 .cressie , n. and pardo , l. ( 2002 ) .phi - divergence statisitcs . in : elshaarawi ,plegorich , w.w . editors ._ encyclopedia of environmetrics* 13*. pp : 15511555 , john wiley and sons , new york .cressie , n. and read , t. r. c. ( 1984 ) .multinomial goodness - of - fit tests ._ journal of the royal statistical society , _ series b , * 46 * , 440464 .felipe , a. martn , n. , miranda , p. and pardo , l. ( 2015 ) .empirical phi - divergence test statistics for testing simple null hypotheses based on exponentially tilted empirical likelihood estimators ._ arxiv _ preprint arxiv:1503.00994 fu , y. , wang , x. and wu , c. ( 2008 ) . weighted empirical likelihood inference for multiple samples .. _journal of statistical planning and inference , _ * 139 * , 14621473 .jing , b. y. ( 1995 ) .two - sample empirical likelihood method . _ statistics and probability letters , _ * 24 * , 315319__. _ _ liu , y. , c. zou , and zhang , r. ( 2008 ) . _ statistics and probability letters , _ * 78 * , 548556__. _ _ menndez , m. l. , morales , d. , pardo , l. and salicr , m. ( 1995 ) .asymptotic behavior and statistical applications of divergence measures in multinomial populations : a unified study ._ statistical papers , _ * 36 , * 129 .menndez , m. l. , pardo , j. a. , pardo , l. and pardo , m. c. ( 1997 ) .asymptotic approximations for the distributions of the -divergence goodness - of - fit statistics : applications to rnyi s statistic ._ kybernetes , _ * 26 , * 442452. owen , a. b. 
( 1988 ) .empirical likelihood ratio confidence interval for a single functional ._ biometrika _ , * 75 * , 308313 .owen , a. b. ( 1990 ) .empirical likelihood confidence regions . _the annals of statistics _ , * 18 * , 90120 .owen , a. b. ( 1991 ) .empirical likelihood for linear models ._ the annals of statistics _ , * 19 * , 17251747 .owen , a. b. ( 2001 ) ._ empirical likelihood , _ chapman and hall / crc . pardo , l. ( 2006 ) ._ statistical inference based on divergence measures_. chapman & hall/ crc press , boca raton , florida .qin , j. ( 1994 ) .semi - parametric likelihood ratio confidence intervals for the difference of two sample means . _ annals of the institute of statitical mathematics , _ * 46 * , 117126 .qin , j. ( 1998 ) .inferences for case - control and semiparametric two - sample densisty ratio models ._ biometrika , _ * 85 * , 619630 .qin , j. and lawless , j. ( 1994 ) .empirical likelihood and general estimating equations ._ the annals of statistics _ , * 22 * , 300325 .qin , y. and zhao , l. ( 2000 ) .empirical likelihood ratio intervals for various differences of two populations . _chinese systems sci . math ., _ * 13 , * 2330 .rnyi , a. ( 1961 ) . on measures of entropy and information ._ proceedings of the fourth berkeley symposium on mathematical statistics and probability , _ * 1 * , 547561__. _ _ sharma , b. d. and mittal , d. p. ( 1997 ) .new non - additive measures of relative information ._ journal of combinatorics , information & systems science , _ * * 2 * * _ _ , _ _ 122133__. _ _ tsao , m. and wu , c. ( 2006 ) .empirical likelihood inference for a common mean in the presence of heteroscedasticity . _ the canadian journal of statistics _ , * 34 * , 1 , 4559 .yan , y. ( 2010 ) .empirical likelihood inference for two - sample problems .unpublished master s thesis , department of statisticsd and actuarial science , university of waterloo , canada .yu , p.l.h ., su , y. and sinha , b. ( 2002 ) .estimation of the common mean of a bivariate normal population ._ annals of the institute of statistical mathematics _ , * 54 * , 861878 .wu , c. and yan , y. ( 2012 ) empirical likelihood inference for two - sample probllems ._ statistics and its inference , _ * 5 * , 345354 .zhang , b. ( 2000 ) .estimating the treatment effect in the two - sample problem with auxiliary information . _ nonparametric statistics , _ * 12 , * 377389 .first we are going to establish if we denote we have .a taylor expansion gives on the other hand then but , because ( see page 220 in owen ( 2001 ) ) , and applying the strong law of large numbers. , because by lemma 11.3 in page 218 in owen ( 2001). . because applying lemma 11.2 in page 218 in owen ( 2001). .therefore in a similar way we can get therefore , applying ( [ 14 ] ) , and from ( [ muhat ] ) we have therefore, now we have, and where is such that ( [ ass ] ) . hence from which is obtained and now the result follows .
empirical phi-divergence test-statistics have been demonstrated to be a useful technique for testing a simple null hypothesis , improving the finite sample behaviour of the classical likelihood ratio test-statistic , as well as for model misspecification problems , in both cases for the one-population problem . this paper introduces this methodology for two-sample problems . a simulation study illustrates situations in which the new test-statistics become a competitive tool with respect to the classical z-test and the likelihood ratio test-statistic . *ams 2001 subject classification:* 62f03 , 62f25 . *keywords and phrases:* empirical likelihood , empirical phi-divergence test statistics , phi-divergence measures , power function .
non - conforming finite element methods ( fems ) play an important role in computational mechanics .they allow the discretization of partial differential equations ( pdes ) for incompressible fluid flows , for almost incompressible materials in linear elasticity , and for low polynomial degrees in the ansatz spaces for higher - order problems .the projection property of the interpolation operator of the non - conforming fem , also named after crouzeix and raviart , states that the projection of onto the space of piecewise constant functions equals the space of piecewise gradients of the non - conforming interpolation of functions in the non - conforming finite element space .this property is the basis for the proof of the discrete inf - sup condition for the stokes equations as well as for the analysis of adaptive algorithms .many possible generalizations of the non - conforming fem to higher polynomial degrees have been proposed .all those generalizations are either based on a modification of the classical concept of degrees of freedom , are restricted to odd polynomial degrees , or employ an enrichment by additional bubble - functions .however , none of those generalizations possesses a corresponding projection property of the interpolation operator for higher moments ( see remark [ r : pmpnoteqcr ] below ) .this paper introduces a novel formulation of the poisson equation ( in below ) based on the helmholtz decomposition along with its discretization of arbitrary ( globally fixed ) polynomial degree .this new discretization approximates directly the gradient of the solution , which is often the quantity of interest , instead of the solution itself .for the lowest - order polynomial degree , the discrete helmholtz decomposition of proves equivalence of the novel discretization with the known non - conforming crouzeix - raviart fem and therefore they appear in a natural hierarchy . in the context of the novel ( mixed ) formulation ,these discretizations turn out to be conforming .although the complexity of the new discretization itself is competitive with that of a standard fem , the method requires the pre - computation of some function such that its divergence equals the right - hand side .if this is not computable analytically , this results in an additional integration ( see also remark [ r : computationphi ] below ) . 
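one elementary way to obtain such a vector field , when no closed form is at hand , is to integrate the right-hand side along one coordinate direction : the field $\varphi(x,y)=(\int_{x_0}^{x}f(t,y)\,dt,\,0)$ satisfies $\operatorname{div}\varphi=f$ . the following python sketch implements this construction by one-dimensional quadrature ; the choice of the integration direction , the reference point $x_0$ and the use of scipy are illustrative assumptions and not the construction of remark [ r : computationphi ] .

```python
import numpy as np
from scipy.integrate import quad

def make_phi(f, x0=0.0):
    """Return phi(x, y) = (int_{x0}^{x} f(t, y) dt, 0), so that div phi = f."""
    def phi(x, y):
        first, _ = quad(lambda t: f(t, y), x0, x)   # 1D quadrature in the x-direction
        return np.array([first, 0.0])
    return phi

# illustrative right-hand side and a crude finite-difference check that div phi = f
f = lambda x, y: np.exp(x) * np.cos(np.pi * y)
phi = make_phi(f)

h = 1e-5
x, y = 0.3, 0.7
div_phi = ((phi(x + h, y)[0] - phi(x - h, y)[0]) / (2 * h)
           + (phi(x, y + h)[1] - phi(x, y - h)[1]) / (2 * h))
print(div_phi, f(x, y))   # the two numbers should agree up to O(h^2)
```

for polynomial right-hand sides the integral can of course be evaluated exactly instead of numerically .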
however , this paper focuses on the poisson problem as a model problem to introduce the idea of the new approach and to give a broad impression over possible extensions as quadrilateral discretizations ( including a discrete helmholtz decomposition on quadrilateral meshes for the non - conforming rannacher - turek fem as a further highlight of this paper ) , the generalization to three dimensions , or inhomogeneous mixed boundary conditions .the advantages of the new approach in some applications will be the topic of forthcoming papers .the presence of singularities for non - convex domains usually yields the same sub - optimal convergence rate for any polynomial degree .this motivates adaptive mesh - generation strategies , which recover the optimal convergence rates .this paper presents an adaptive algorithm and proves its optimal convergence .the proof essentially follows ideas from the context of the non - conforming crouzeix - raviart fem .this illustrates that the novel discretization generalizes it in a natural way .since the efficient and reliable error estimator involves a data approximation term without a multiplicative power of the mesh - size , the adaptive algorithm is based on separate marking .a possible drawback of the new fems is that the gradient of the solution is approximated , but not the solution itself .this excludes obvious generalizations to partial differential equations where appears in lower - order terms .the remaining parts of this paper are organized as follows .section [ s : notation ] defines some notation .section [ s : pmpformulation ] introduces the novel formulation based on the helmholtz decomposition and its discretization together with an a priori error estimate .the equivalence with the non - conforming fem for the lowest - order case is proved in subsection [ ss : pmpcr ] .section [ s : pmpremarks ] summarizes some generalizations .section [ s : pmpmedius ] is devoted to a medius analysis of the fem , which uses a posteriori techniques to derive a priori error estimates .section [ s : pmpafem ] proves quasi - optimality of an adaptive algorithm , while section [ s : pmp3d ] outlines the generalization to 3d .section [ s : pmpnumerics ] concludes this paper with numerical experiments .throughout this paper is a simply connected , bounded , polygonal lipschitz domain .standard notation on lebesgue and sobolev spaces and their norms is employed with scalar product . given a hilbert space ,let resp . denote the space of functions with values in whose components are in resp . and let denote the subset of of functions with vanishing integral mean .the space of functions whose weak divergence exists and is in is denoted with .the space of infinitely differentiable functions reads and the subspace of functions with compact support in is denoted with .the piecewise action of differential operators is denoted with a subscript .the formula represents an inequality for some mesh - size independent , positive generic constant ; abbreviates . by convention ,all generic constants do neither depend on the mesh - size nor on the level of a triangulation but may depend on the fixed coarse triangulation and its interior angles .the curl operator in two dimensions is defined by for sufficiently smooth . 
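for orientation , a common convention for the two-dimensional curl operators used below ( stated here as an assumption , since sign conventions vary in the literature ) is

\[
\operatorname{curl} v:=\begin{pmatrix}-\partial v/\partial x_{2}\\ \partial v/\partial x_{1}\end{pmatrix}
\quad\text{for scalar } v ,
\qquad
\operatorname{curl}\boldsymbol{q}:=\partial q_{2}/\partial x_{1}-\partial q_{1}/\partial x_{2}
\quad\text{for } \boldsymbol{q}=(q_{1},q_{2}) .
\]

with either sign convention , an integration by parts shows that gradients of functions vanishing on the boundary and curls of scalar functions are orthogonal in the $L^2$ scalar product , which is the orthogonality underlying the helmholtz decomposition below .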
a shape - regular triangulation of a bounded , polygonal, open lipschitz domain is a set of closed triangles such that and any two distinct triangles are either disjoint or share exactly one common edge or one vertex .let denote the edges of a triangle and the set of edges in .any edge is associated with a fixed orientation of the unit normal on ( and denotes the unit tangent on ) . on the boundary, is the outer unit normal of , while for interior edges , the orientation is fixed through the choice of the triangles and with and is then the outer normal of on . in this situation , :=v\vert_{t_+}-v\vert_{t_-} ] . for and , let denote the set of piecewise polynomials and .given a subspace , let denote the projection onto and let abbreviate .given a triangle , let denote the square root of the area of and let denote the piecewise constant mesh - size with for all . for a set of triangles , let abbreviate given an initial triangulation , an admissible triangulation is a regular triangulation which can be created from by newest - vertex bisection .the set of admissible triangulations is denoted by .this section introduces the new formulation based on the helmholtz decomposition in subsection [ ss : pmpformulation ] and its discretization in subsection [ ss : pmpdiscretisation ] .subsection [ ss : pmpcr ] discusses the equivalence with the non - conforming crouzeix - raviart fem . given the simply connected , bounded , polygonal lipschitz domain and , the poisson model problem seeks with the novel weak formulation is based on the classical helmholtz decomposition for any simply connected domain , where the sum is orthogonal with respect to the scalar product .note that for , the definition of the curl implies define and and let satisfy .the novel weak formulation of the poisson problem seeks with this formulation is the point of departure for the numerical approximation of in subsection [ ss : pmpdiscretisation ] .[ r : existence ] since , any satisfies the inf - sup condition this and brezzi s splitting lemma imply the unique existence of a solution to .the orthogonality of and implies the second equation of and the helmholtz decomposition imply the existence of with .since satisfies , the orthogonality in implies that any satisfies and , hence , solves .[ r : boundary ] let with closed , , and each connectivity component of has positive length .assume that the triangulation resolves .let denote the space of generalized normal traces of functions and let and in the sense that there holds on in the sense of distributions for some . consider the mixed boundary value problem in with on and on .let denote the subspace of of functions with vanishing trace on . for , define .define the helmholtz decomposition for mixed boundary conditions ( * ? ? 
?* corollary 3.1 ) then leads to the following formulation .let with additionally fulfil the boundary condition and seek with since , the equivalence follows as in subsection [ ss : pmpformulation ] and with if is a multiply connected polygonal bounded lipschitz domain and , such that all parts of lie on the outer boundary of ( on the unbounded connectivity component of ) , then the helmholtz decomposition of remark [ r : boundary ] still holds and a discretization as above is then immediate .however , if the dirichlet boundary also covers parts of interior boundary , that helmholtz decomposition does no longer hold : there exist harmonic functions which are constant on different parts of and , hence , are neither in , nor in .[ r : computationphi ] the computation of appears as a practical difficulty because needs to be defined through an integration of . if has some simple structure , e.g. , is polynomial , this can be done manually , while for more complicated , a numerical integration of has to be employed , but is possible in parallel .let be a regular triangulation of and and define the discretization of seeks and with [ e : pmpdp ] since there are no continuity conditions on and since , the first equation is fulfilled in a strong form , i.e. , in contrast to classical finite element methods , the approximation of is a gradient only in a _ discrete orthogonal _ sense , namely . for , subsection [ ss : pmpcr ] below proves that this _ discrete orthogonal gradient property _ is equivalent to being a non - conforming gradient of a crouzeix - raviart finite element function .the main motivation of the novel formulation is the generalization of this scheme to any polynomial degree .[ r : dexistence ] since , the discrete inf - sup condition is fulfilled .this and brezzi s splitting lemma imply the unique existence of a solution to .the equality in follows from the orthogonality of and .the conformity of the method and the inf - sup conditions from remarks [ r : existence ] and [ r : dexistence ] imply the following best - approximation result .[ t : pmpbestapprox ] the solution to and the discrete solution of satisfy a direct analysis of the bilinear form defined by for all and all reveals that the inf - sup constant of equals and , hence , the constant hidden in in is .[ r : globalinfsup ] the best - approximation of theorem [ t : pmpbestapprox ] contains the term on the right - hand side , which depends on the choice of .this seems to be worse than the best - approximation results for standard fems , which do not involve such a term .however , if is chosen smooth enough , then has at least the same regularity as , and therefore the convergence rate is not diminished . on the other hand , the approximation space for does not have any continuity restriction and so the first approximation term is superior to the best - approximation of a standard fem , where is approximated with gradients of finite element functions .however , ( * ? ? ? 
* theorem 3.2 ) and the comparison results of prove equivalence of and the best - approximation with gradients of a standard fem up to some multiplicative constant .the following lemma proves a projection property .this means that for any , the best - approximation of in is a _ discrete orthogonal gradient _ in the sense that it is orthogonal to and so belongs to the set of _ discrete orthogonal gradients _ defined by the projection property is the key ingredient in the optimality analysis of section [ s : pmpafem ] .[ l : pmpintegralmean ] it holds that .moreover , if is an admissible refinement of , then .let .since and , the orthogonality in the helmholtz decomposition implies for any that this proves . for the converse direction , let and let be a solution ( possibly not unique ) to the orthogonality of to implies the existence of such that .therefore , and , hence , is a piecewise polynomial of degree and therefore .but since , it holds that and , hence , . this proves and , therefore , .a similar proof applies in the discrete case and proves .problem is equivalent to the problem : find such that therefore , the system matrix is ( in 2d ) the same than that of a standard fem ( up to degrees of freedom on the boundary ) .the non - conforming crouzeix - raviart finite element space reads since ( if the triangulation consists of more than one triangle ) , the weak gradient of a function does not exist in general .however , the piecewise version defined by for all exists .the non - conforming discretization of the poisson problem seeks with the lowest - order space of raviart - thomas finite element functions reads the raviart - thomas functions have the property that the integration by parts formula holds for functions in as well as for functions in .the following proposition proves the equivalence of the non - conforming discretization and the discretization for .note that the discretization is a non - conforming discretization , while the discretization is a conforming one .[ p : pmpequivcr ] let be piecewise constant and let satisfy .then the discrete solution to for and the gradient of the discrete solution to coincide , the crucial point is the discrete helmholtz decomposition since is orthogonal to , this implies for some .let for some .then is orthogonal to and a piecewise integration by parts and imply hence , solves . the projection property from lemma [ l : pmpintegralmean ] generalizes the famous integral mean property for all of the non - conforming interpolation operator .[ r : pmpnoteqcr ] for higher polynomial degrees , the discretization is not equivalent to known non - conforming schemes , in the sense that for those non - conforming finite element spaces .this follows from for non - conforming fems with enrichment .a dimension argument shows for the non - conforming fems of without enrichment and therefore .moreover , this proves that the generalization of the projection property to higher moments from lemma [ l : pmpintegralmean ] can not hold for those finite element spaces , in contrast to the discretization .subsection [ ss : pmprectangles ] generalizes the novel fem to quadrilateral meshes and proves a new discrete helmholtz decomposition for the rotated non - conforming rannacher - turek fem .subsection [ ss : pmpraviartthomas ] discusses a discretization with raviart - thomas functions . 
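before turning to these generalizations , the lowest-order case can be made concrete : subsection [ ss : pmpcr ] identifies the new scheme for the lowest polynomial degree with the classical non-conforming crouzeix-raviart fem , and the following python sketch assembles and solves that classical discretization of the poisson problem with homogeneous dirichlet data on a small structured mesh . the mesh , the right-hand side , the midpoint quadrature and all names are illustrative assumptions ; this is a sketch of the textbook method , not the implementation behind the experiments in section [ s : pmpnumerics ] .

```python
import numpy as np

def cr_poisson_solve(nodes, triangles, f):
    """Lowest-order Crouzeix-Raviart FEM for -Laplace(u) = f with u = 0 on the boundary.
    DOFs sit on edge midpoints; the basis function of the edge opposite vertex i
    is 1 - 2*lambda_i, with lambda_i the barycentric coordinate of vertex i."""
    edges, edge_count = {}, {}
    def edge_id(a, b):
        key = (min(a, b), max(a, b))
        if key not in edges:
            edges[key] = len(edges)
        edge_count[key] = edge_count.get(key, 0) + 1
        return edges[key]

    tri_edges = []
    for tri in triangles:
        # local edge e_i is the edge opposite local vertex i
        tri_edges.append([edge_id(tri[(i + 1) % 3], tri[(i + 2) % 3]) for i in range(3)])

    ndof = len(edges)
    A = np.zeros((ndof, ndof))
    b = np.zeros(ndof)
    for tri, loc in zip(triangles, tri_edges):
        p = nodes[list(tri)]                        # 3x2 array of vertex coordinates
        M = np.hstack([np.ones((3, 1)), p])         # rows (1, x_j, y_j)
        C = np.linalg.inv(M)                        # column i: coefficients of lambda_i
        grads = C[1:, :].T                          # row i: grad lambda_i
        area = 0.5 * abs(np.linalg.det(M))
        K = 4.0 * area * grads @ grads.T            # grad(1 - 2*lambda_i) = -2*grad lambda_i
        mids = 0.5 * (p[[1, 2, 0]] + p[[2, 0, 1]])  # midpoint of edge opposite vertex i
        for i in range(3):
            b[loc[i]] += area / 3.0 * f(*mids[i])   # midpoint rule; basis_i is 1 only at its own midpoint
            for j in range(3):
                A[loc[i], loc[j]] += K[i, j]

    # homogeneous Dirichlet data: boundary edges are those contained in a single triangle
    interior = [idx for key, idx in edges.items() if edge_count[key] == 2]
    u = np.zeros(ndof)
    u[interior] = np.linalg.solve(A[np.ix_(interior, interior)], b[interior])
    return u, edges

# tiny structured triangulation of the unit square (illustrative)
N = 8
xs = np.linspace(0.0, 1.0, N + 1)
nodes = np.array([[x, y] for y in xs for x in xs])
triangles = []
for j in range(N):
    for i in range(N):
        v = j * (N + 1) + i
        triangles.append((v, v + 1, v + N + 2))
        triangles.append((v, v + N + 2, v + N + 1))

f = lambda x, y: 2.0 * np.pi ** 2 * np.sin(np.pi * x) * np.sin(np.pi * y)
u, edges = cr_poisson_solve(nodes, np.array(triangles), f)
# exact solution sin(pi*x)*sin(pi*y) has maximum 1; the CR midpoint values should come close
print("max CR midpoint value:", u.max())
```

for larger meshes a sparse matrix format would replace the dense arrays used here .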
for this subsection ,consider a regular partition of in quadrilaterals .define for the reference rectangle ^ 2 ] and set ) , d , e\in q_{k-1}(\widehat{t})\\ \text { such that } \forall ( \widehat{x},\widehat{y})\in \widehat{t}\\ \tau_h(\widehat{x},\widehat{y } ) = a\begin{pmatrix } -\widehat{x}^k \widehat{y}^{k-1}\\ \widehat{x}^{k-1 } \widehat{y}^k \end{pmatrix } + \begin{pmatrix } \widehat{x}^k b(\widehat{y } ) + d(\widehat{x},\widehat{y})\\ \widehat{y}^k c(\widehat{x})+e(\widehat{x},\widehat{y } ) \end{pmatrix } \end{array } \right\}\right.,\\ x_k^{\mathrm{rect}}({\mathcal{t}})&:=\left\{\tau_h\in l^2(\omega;{\mathbb{r}}^2 ) \left\vert \begin{array}{l } \forall t\in{\mathcal{t}}\ , \exists \rho_t\in x_k^{\mathrm{rect}}(\widehat{t } ) \text { such that } \\( \tau_h\circ \psi_t)\vert_{\widehat{t}}\\ \qquad = \begin{pmatrix } 0 & 1 \\ -1 & 0 \end{pmatrix } d(\psi_t^{-1})^\top \circ\psi_t \begin{pmatrix } 0 & -1 \\ 1 & 0 \end{pmatrix } \rho_t \end{array } \right\}\right .. \end{aligned}\ ] ] then a discretization with respect to the quadrilateral partition seeks and with let , i.e. , .a direct calculation reveals for all let with and ) ] vanishes , this proves the orthogonality .let .a computation reveals for all that there exist and such that for , reads since all are squares , and commute , and , hence , .thus , .the dimension of equals and the dimension of equals , while the dimension of equals .this and lemma [ l : pmpeulerrect ] prove the assertion .the best - approximation ( ii ) from above proves quasi - optimal convergence even for arbitrary quadrilaterals .standard interpolation error estimates for and for lead to first - order convergence rates of for sufficiently smooth solutions .this should be contrasted with , where quasi - optimal convergence is only obtained for a modification of where is defined in terms of local coordinates .this subsection shows that the classical mixed raviart - thomas fem can be regarded as a particular choice of the ansatz spaces in the new mixed scheme .let denote a regular triangulation of in triangles .define the space of raviart - thomas functions and then the following problem is a discretization of : seek with since and for all , it follows .this and the conformity of the method guarantee as in section [ s : pmpformulation ] and in subsection [ ss : pmprectangles ] the unique existence of solutions , a best - approximation result , and the projection property the discrete helmholtz decomposition of proves with the operator defined for all by this decomposition yields the equivalence of with the problem : seek with this is the classical raviart - thomas discretization with replaced by .assume now that the right - hand side is a raviart - thomas function .since by definition with from subsection [ ss : pmpdiscretisation ] and since is the solution of it holds with from . since and , it follows for , the equivalence with the crouzeix - raviart fem then proves the identity which is also known as marini identity .the medius analysis of proves for the discrete solution to the best - approximation result the following theorem proves a generalization for the discretization for the lowest order case .[ t : pmpmedius ] let be the solution to and be the solution to. 
then the following best - approximation result holds if is a lowest - order raviart - thomas function , then it allows for an integration by parts formula also with crouzeix - raviart functions ( see subsection [ ss : pmpcr ] ) .therefore , the third term on the right - hand side of vanishes .this and the equivalence with the non - conforming fem of crouzeix and raviart from subsection [ ss : pmpcr ] reveal the best - approximation result . the remaining part of this section is devoted to the proof of theorem [ t : pmpmedius ] .the following lemma from is the key ingredient of this proof .recall the definition of from subsection [ ss : pmpcr ] .[ l : pmpcompanion ] for any there exists with the following properties define .the projection property of lemma [ l : pmpintegralmean ] implies that and the discrete helmholtz decomposition guarantees the existence of with .let denote the companion of from lemma [ l : pmpcompanion ] .then the properties ( i ) and ( iii ) from lemma [ l : pmpcompanion ] yield for the first term on the right - hand side the problems and lead for the second and third term on the right - hand side of to since , it follows properties ( ii ) and ( iii ) of lemma [ l : pmpcompanion ] prove the combination with and and a cauchy inequality yield this and prove the assertion .[ r : pmpcompanionhigherpolynom ] for , remark [ r : pmpnoteqcr ] implies that an analogue of lemma [ l : pmpcompanion ] can not be proved in the same way .this section defines an adaptive algorithm based on separate marking and proves its quasi - optimal convergence .let denote some initial shape - regular triangulation of , such that each triangle is equipped with a refinement edge .a proper choice of these refinement edges guarantees an overhead control .let denote the subset of of all admissible triangulations with at most triangles .the adaptive algorithm involves the overlay of two admissible triangulations , which reads given a triangulation , define for all the local error estimator contributions by \cdot \tau_e \|_{l^2(e)}^2,\\ \mu^2(t)&:= \|\varphi-\pi_k\varphi\|_{l^2(t)}^2 \end{aligned}\ ] ] and the global error estimators by the adaptive algorithm is driven by these two error estimators and runs the following loop .[ a : pmpafem ] initial triangulation , parameters , , .compute solution of with respect to .compute local contributions of the error estimators and .the drfler marking chooses a minimal subset such that .generate the smallest admissible refinement of in which at least all triangles in are refined . compute a triangulation with .generate the overlay of and .sequence of triangulations , discrete solutions and error estimators and . the residual - based error estimator involves the term without a multiplicative positive power of the mesh - size .therefore , the optimality of an adaptive algorithm based on collective marking ( that is and replaced by in algorithm [ a : pmpafem ] ) does not follow from the abstract framework from . the reduction property ( axiom ( a2 ) from ) ,is not fulfilled .algorithm [ a : pmpafem ] considered here is based on separate marking . in this context , the optimality of the adaptive algorithm ( see theorem [ t : pmpoptimalafem ] ) can be proved with a reduction property that only considers .[ r : pmpb1approx ] the step _ mark _ in the second case ( ) can be realized by the algorithm ` approx ` from , i.e. , the thresholding second algorithm followed by a completion algorithm . 
for this algorithm , the assumption ( b1 ) optimal data approximation , which is assumed to hold in the following , follows from the axioms ( b2 ) and ( sa ) from subsection [ ss : pmpaxiomb ] . for a discussion about other algorithms that realize _mark _ in the second case , see . for and [ r : pmplocalapproxclass ] since is assumed to be a lipschitz domain , all patches in an admissible triangulation are edge - connected , i.e. , for all vertices and triangles with , there exists and with , , and for all . under this assumption , (* theorem 3.2 ) shows hence , in the following , we assume that the following assumption ( b1 ) holds for the algorithm used in the step _ mark _ for ( see remark [ r : pmpb1approx ] ) .[ as : pmpb1 ] assume that is finite . given a tolerance , the algorithm used in _mark _ in the second case ( ) in algorithm [ a : pmpafem ] computes with the following theorem states optimal convergence rates of algorithm [ a : pmpafem ] .[ t : pmpoptimalafem ] for and sufficiently small and , algorithm [ a : pmpafem ] computes sequences of triangulations and discrete solutions for the right - hand side of optimal rate of convergence in the sense that the proof follows from the abstract framework of , which employs the bounded overhead of the newest - vertex bisection , under the assumptions ( a1)(a4 ) and ( b2 ) and ( sa ) which are proved in subsections [ ss : pmpstabilityreduction][ss : pmpaxiomb ] .the following two theorems follow from the structure of .let be an admissible refinement of and .let and be the respective discrete solutions to .then , this follows with triangle inequalities , inverse inequalities and the trace inequality from as in ( * ? ? ?* proposition 3.3 ) .let be an admissible refinement of . then there exists and such that this follows with a triangle inequality and the mesh - size reduction property for all as in ( * ? ? ?* corollary 3.4 ) .the following theorem proves discrete reliability , i.e. , the difference between two discrete solutions is bounded by the error estimators on refined triangles only .[ t : pmpdrel ] let be an admissible refinement of with respective discrete solutions and .then , recall the definition of from . since , there exist and with .since , the orthogonality furthermore implies that the discrete error can be split as the projection property , lemma [ l : pmpintegralmean ] , proves .hence , problem implies that the first term of the right - hand side equals for any triangle , it holds .therefore , since is a refinement of , it holds let denote the quasi interpolant from of which satisfies the approximation and stability properties and for all edges . since and , an integration by parts leads to ( r_{{\mathcal{t}}_\star}-r_{\mathcal{t}})\,ds.\end{aligned}\ ] ] for a triangle , any edge satisfies .hence , for all .this , the cauchy inequality and the approximation and stability properties of the quasi interpolant lead to since for all edges , the approximation and stability properties of the quasi interpolant and the trace inequality lead to ( r_{{\mathcal{t}}_\star}-r_{\mathcal{t}})\,ds\\ & \qquad\qquad\qquad\qquad \lesssim \sqrt{\sum_{e\in{\mathcal{e}}({\mathcal{t}})\setminus{\mathcal{e}}({\mathcal{t}}_\star ) } h_t \|[p_{\mathcal{t}}\cdot\tau_e]_e\|_{l^2(e)}^2 } \left\|{\operatorname{curl}}r_{{\mathcal{t}}_\star}\right\|_{l^2(\omega)}. 
\end{aligned}\ ] ] the combination of the previous displayed inequalities yields since and , the triangle inequality yields the assertion .the discrete reliability of theorem [ t : pmpdrel ] together with the convergence of the discretization proves reliability of the residual - based error estimator .this is summarized in the following proposition .[ p : effrelresidualest ] let and be the solutions to and for some . there exist constants with the a priori error estimate from theorem [ t : pmpbestapprox ] implies the convergence of the discrete solutions .this and theorem [ t : pmpdrel ] proves the reliability .the efficiency follows from the standard bubble function technique .the following theorem proves quasi - orthogonality of the discretization .[ t : pmpquasiorthogonality ] let be some sequence of triangulations with discrete solutions to .let .then , the projection property , lemma [ l : pmpintegralmean ] , proves with from .hence , problem leads to the subtraction of these two equations and an index shift leads , for any with , to since is -orthogonal to , a cauchy and a weighted young inequality imply the orthogonality for all proves the definition of yields the combination of and leads to the combination of the arguments of proves this , the discrete problem , and the discrete reliability from theorem [ t : pmpdrel ] lead to this and a further application of theorem [ t : pmpdrel ] leads to the combination of with implies the young inequality , the triangle inequality , and imply since is arbitrary , the combination with , , and yields the assertion .the following theorem together with assumption [ as : pmpb1 ] form the axiom ( b ) from .any admissible refinement of satisfies this follows directly from the definition of .this section is devoted to the generalization to 3d .subsection [ ss : pmp3ddef ] defines the novel discretization and comments on basic properties , while subsection [ ss : pmp3dafem ] is devoted to optimal convergence rates for the adaptive algorithm . for this section ,let be a simply connected , bounded , polygonal lipschitz domain in .for the sake of simplicity , we also assume that is connected ( i.e. , is contractible ) .the curl operator acts on a sufficiently smooth vector field as with the cross product or vector product .let denote the space of all with for the weak , i.e. , in contrast to the two - dimensional case , .the helmholtz decomposition in 3d reads and the sum is orthogonal .it is a consequence of the identity in the de rham complex .let with .then the poisson problem is equivalent to the problem : find with in contrast to the two - dimensional case , the operator has a non - trivial kernel .classical results characterize this kernel as . to enforce uniqueness ,we can reformulate as follows .seek with note that implies .standard finite element spaces to discretize in 3d are the ndlec finite element spaces ( also called edge elements ) which are known from the context of maxwell s equations .let be a regular triangulation of in tetrahedra in the sense of .the spaces of first kind ndlec finite elements read let . since , a generalization of to 3d seeks with the discrete exact sequence implies that the elements in with vanishing curl are exactly the gradients of functions in .therefore , the uniqueness in can be obtained in the following formulation .seek with note that is the kernel of and so implies .this variable is introduced in order that has the form of a standard mixed system . the discrete helmholtz decomposition of ( * ? ? 
?* lemma 5.4 ) proves that for the lowest order discretization , is a crouzeix - raviart function and so can be seen as a generalization of the non - conforming crouzeix - raviart fem to higher polynomial degrees .the inf - sup condition follows from and .this and the conformity of the method lead to the best - approximation result since , this is equivalent to the following proposition states a projection property similar to lemma [ l : pmpintegralmean ] for the two - dimensional case . to this end , define since is the kernel of , it holds this implies [ l : pmp3dintegralmean ] let with for all ( that means that is a gradient of a function ). then . if is an admissible refinement of , then and , the assertion follows with the arguments in the proof of lemma [ l : pmpintegralmean ] .this subsection outlines the proof of optimal convergence rates for algorithm [ a : pmpafem ] in 3d driven by the error estimators and defined by the local contributions \|_{l^2(e)}^2,\\\mu^2(t)&:=\|\varphi-\pi_{x_h({\mathcal{t}})}\varphi\|_{l^2(t)}^2\end{aligned}\ ] ] and . here, denotes the faces of a tetrahedron and denotes the piecewise constant mesh - size function defined by .the refinement of triangulations in algorithm [ a : pmpafem ] is done by newest - vertex bisection .let denote the space of admissible triangulations with at most tetrahedra more than .as in subsection [ ss : pmpafemdef ] , define the seminorm assume that assumption [ as : pmpb1 ] holds .the following theorem states optimal convergence rates for algorithm [ a : pmpafem ] for 3d .let .for and sufficiently small and , algorithm [ a : pmpafem ] computes sequences of triangulations and discrete solutions for the right - hand side of optimal rate of convergence in the sense that the proof follows as in section [ s : pmpafem ] from ( a1)(a4 ) and ( b ) from and the efficiency of and .the proof of efficiency follows with the standard bubble - function technique .the proofs of the axioms ( a1)(a4 ) and ( b ) are outlined in the following .the axioms ( a1 ) stability and ( a2 ) reduction follow as in subsection [ ss : pmpstabilityreduction ] with triangle inequalities , inverse inequalities , a trace inequality similar to , and the mesh - size reduction property for all .however , for ( a3 ) quasi - orthogonality and ( a4 ) discrete reliability , the interpolation operator of can not be applied directly to as done in the proof of theorem [ t : pmpdrel ] , because .this can be overcome by a quasi - interpolation based on a quasi - interpolation operator from and a projection operator from .its properties are summarized in the following theorem .[ t : pmp3dquasiinterpol ] let be an admissible refinement of and define .let .then there exists , , and with this follows as in the proof of ( * ? ? ?* theorem 5.3 ) and with the ellipticity on the discrete kernel from ( * ? ? ? 
* proposition 4.6 ) .the differences between the proof of ( a4 ) discrete reliability and the proof of theorem [ t : pmpdrel ] are outlined in the following .let and denote the discrete solutions to .as in the proof of theorem [ t : pmpdrel ] , let and such that .the first term of the right - hand side of is estimated as in the proof of theorem [ t : pmpdrel ] , while for the second term , the quasi - interpolant of with for and from theorem [ t : pmp3dquasiinterpol ] is employed .this yields a piecewise integration by parts and the arguments of the proof of theorem [ t : pmpdrel ] conclude the proof .the crucial point is that is smooth enough to allow for a trace inequality .the proof of ( a3 ) quasi - orthogonality follows as in the proof of theorem [ t : pmpquasiorthogonality ] with the projection property of lemma [ l : pmp3dintegralmean ] and the following modifications in . since ( in the analogue notation as in ) , there exists with .theorem [ t : pmp3dquasiinterpol ] guarantees the existence of , and with .this implies in that since is smooth enough , a piecewise integration by parts and the arguments of the proof of theorem [ t : pmpdrel ] then prove this and the arguments of theorem [ t : pmpquasiorthogonality ] eventually prove the quasi - orthogonality .this section presents numerical experiments for the discretization for . subsections [ ss : pmpnumlshapeddbex][ss : pmpnumsingalpha ] compute the discrete solutions on sequences of uniformly red - refined triangulations ( see figure [ f : pmpredrefinement ] for a red - refined triangle ) as well as on sequences of triangulations created by the adaptive algorithm [ a : pmpafem ] with bulk parameter and and . the convergence history plots are logarithmically scaled and display the error against the number of degrees of freedom ( ndof ) of the linear system resulting from the schur complement .the underlying l - shaped domain \times [ -1,0]) ] is defined for boundary edges , , with adjacent triangle by \cdot\tau_e:= p_h\vert_{t_+}\cdot\tau_e - \nabla u_d\cdot\tau_e.\end{aligned}\ ] ] the error estimator is then defined by . the local data error estimator contributions read the global error estimator is defined by .the errors and error estimators for the approximation of for are plotted in figure [ f : pmpnumlshapeddbex ] against the number of degrees of freedom .the errors and error estimators show an equivalent behaviour with an overestimation of approximately 10 .uniform refinement leads to a suboptimal convergence rate of for .the adaptive refinement reproduces the optimal convergence rates of for .figure [ f : pmpnumlshapeddbextriang ] depicts three meshes created by the adaptive algorithm for , , and with approximately 1000 degrees of freedom .the singularity at the re - entrant corner leads to a strong refinement towards , while the refinement for also reflects the behaviour of the right - hand side , i.e. , one also observes a moderate refinement on the circular ring .the marking with respect to the data - approximation ( in algorithm [ a : pmpafem ] ) is applied at the first 7 ( resp . 5 and 10 ) levels for ( resp . and ) and then at approximately every third level .errors and error estimators from subsection [ ss : pmpnumlshapeddbex ] . 
]adaptively refined triangulations for the experiment from subsection [ ss : pmpnumlshapeddbex].,title="fig:",scaledwidth=32.0% ] adaptively refined triangulations for the experiment from subsection [ ss : pmpnumlshapeddbex].,title="fig:",scaledwidth=32.0% ] adaptively refined triangulations for the experiment from subsection [ ss : pmpnumlshapeddbex].,title="fig:",scaledwidth=32.0% ] for and define with .the error estimators are plotted against the degrees of freedom in figure [ f : pmpnumlshaped ] for .the error estimators show for a suboptimal convergence rate of for uniform refinement .the adaptive algorithm [ a : pmpafem ] recovers the optimal convergence rate of .adaptively refined meshes are depicted in figure [ f : pmpnumlshapedtriang ] for approximately 1000 degrees of freedom .the strong refinement towards the singularity at the re - entrant corner is clearly visible .the smoothness of implies that the data - approximation error estimator vanishes on all triangulations for . for , does not vanish , nevertheless , since for all , only the drfler marking is applied .error estimators for the experiment from subsection [ ss : pmpnumlshaped ] . ]adaptively refined triangulations for the experiment from subsection [ ss : pmpnumlshaped].,title="fig:",scaledwidth=32.0% ] adaptively refined triangulations for the experiment from subsection [ ss : pmpnumlshaped].,title="fig:",scaledwidth=32.0% ] adaptively refined triangulations for the experiment from subsection [ ss : pmpnumlshaped].,title="fig:",scaledwidth=32.0% ] this subsection is devoted to a numerical investigation of the dependence of the error on the regularity of .the exact smooth solution of reads .define with defined by . then with .the errors and error estimators are plotted in figure [ f : pmpnumlshapedsingalpha ] against the number of degrees of freedom .the convergence rate on uniform red - refined meshes for is and , hence , the convergence rate seems to depend on the regularity of .the errors and error estimators show the same convergence rate .figure [ f : pmpnumlshapedsingalpha_p1 ] focuses on the results for and uniform mesh - refinement .the error and the error estimator show a convergence rate between and , while converges with a rate of due to the singularity of .this numerical experiment suggests that the error does not depend on the regularity of ( at least in a preasymptotic regime ) .the triangle inequality implies .this upper bound is also plotted in figure [ f : pmpnumlshapedsingalpha_p1 ] .figure [ f : pmpnumlshapedsingcurltriang ] depicts adaptively refined meshes for with approximately 1000 degrees of freedom .the singularity of leads to a strong refinement towards the re - entrant corner .the marking with respect to the data - approximation ( in algorithm [ a : pmpafem ] ) is only applied at levels 15 , 7 , 12 , and 18 for .all other marking steps for use the drfler marking ( ) .errors and error estimators for the experiment with singular from subsection [ ss : pmpnumsingalpha ] . ] errors and error estimators for the experiment with singular from subsection [ ss : pmpnumsingalpha ] and uniform refinement . 
[figure: adaptively refined triangulations for the experiment from subsection [ss:pmpnumsingalpha], three panels with approximately 1000 degrees of freedom each.]
the author would like to thank professor c. carstensen for valuable discussions .
raviart and j. m. thomas . a mixed finite element method for 2nd order elliptic problems . in _ mathematical aspects of finite element methods ( proc . conf ., consiglio naz . delle ricerche ( c.n.r . ) , rome , 1975 ) _ , pages 292 - 315 . springer , berlin , 1977 .
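To make the marking step of the adaptive algorithm discussed above concrete, the following is a minimal, self-contained sketch of Dörfler (bulk) marking: given refinement indicators per element, it selects a smallest set whose squared contributions accumulate at least a fraction theta of the total. The element names and toy numbers are illustrative only; marking with respect to the data-approximation estimator uses the same routine applied to the mu-indicators.

```python
def doerfler_mark(indicators, theta=0.3):
    """Bulk (Doerfler) marking: smallest set of elements whose squared
    indicators sum to at least theta times the total squared estimator.
    `indicators` maps an element id to its local estimator contribution."""
    total = sum(v * v for v in indicators.values())
    marked, acc = [], 0.0
    # greedily take the largest contributions first
    for elem in sorted(indicators, key=indicators.get, reverse=True):
        marked.append(elem)
        acc += indicators[elem] ** 2
        if acc >= theta * total:
            break
    return marked

# toy usage: four elements, the first one dominates the estimated error
eta = {"T1": 0.9, "T2": 0.7, "T3": 0.1, "T4": 0.05}
print(doerfler_mark(eta, theta=0.3))   # -> ['T1'], since 0.81 >= 0.3 * 1.3125
```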
this paper generalizes the non - conforming fem of crouzeix and raviart and its fundamental projection property by a novel mixed formulation for the poisson problem based on the helmholtz decomposition . the new formulation allows for ansatz spaces of arbitrary polynomial degree and its discretization coincides with the mentioned non - conforming fem for the lowest polynomial degree . the discretization directly approximates the gradient of the solution instead of the solution itself . besides the a priori and medius analysis , this paper proves optimal convergence rates for an adaptive algorithm for the new discretization . these are also demonstrated in numerical experiments . furthermore , this paper focuses on extensions of this new scheme to quadrilateral meshes , mixed fems , and three space dimensions . * keywords * non - conforming fem , helmholtz decomposition , mixed fem , adaptive fem , optimality * ams subject classification * 65n30 , 65n12 , 65n15
underwater acoustics can fulfil the needs of a multitude of underwater applications .this include : oceanographic data collection , warning systems for natural disasters ( e.g. , seismic and tsunami monitoring ) , ecological applications ( e.g. , pollution , water quality and biological monitoring ) , military underwater surveillance , assisted navigation , industrial applications ( offshore exploration ) , to name just a few . detection of hydroacoustic signals is characterized by a target probability of false alarm and probability of detection .the detection is performed for a buffer of samples , , recorded from the channel ( usually in a sliding time window fashion ) . in this paper , the focus is on detection of signals of known structure .the applications in mind are active sonar systems , acoustic localization systems ( e.g. , ultra - short baseline ) , and acoustic systems used for depth estimation , ranging , detection of objects , and communications . in this paper , we focus on the first step in the detection chain , namely , a binary hypothesis problem where the decoder differentiate between a _ noise - only _ hypothesis and a _ signal exist _ hypothesis .the former is when the sample buffer , , consists of ambient noise , and the latter is the case where the sample buffer also includes a distinct received hydroacoustic signal . without channel state information , the most common detection scheme is the matched filter , which is optimal in terms of the signal - to - noise ratio ( snr ) in case on an additive white gaussian channel .the matched filter detector is a constant false alarm rate ( cfar ) test , and its detection threshold is determined only by the target false alarm probability ( cf . ) . due to the ( possibly ) large dynamic range of the detected signal , and for reasons of template matching , the matched filter is often normalized by the noise covariance matrix .this normalization is often referred to as adaptive normalized matched filter ( anmf ) and is the preferred choice in several tracking applications such as gradient descent search , active contour models , and wavelet convolution . to estimate the noise covariance matrix , several noise - only training signalsare required . since this limits the application , and since the noise may be time - varying , various anmf detectors have been developed .based on the noise texture model , suggested a maximum likelihood estimator for the noise covariance matrix .alternatively , in an iterative procedure is performed where first the covariance matrix is assumed known and the test statistics for a signal vector is calculated .next , using these statistics and additional noise - only vectors , the noise covariance matrix is estimated and is substituted back into the test statistics . in , an adaptive matched subspace detector is developed and its statistical behavior is analyzed to adapt the detector to unknown noise covariance matrices in cases where the received signal is distorted compared to transmitted one .the above normalization methods of the matched filter require an estimation of the covariance matrix of the ambient noise .as shown in , mismatch in this estimation effects detection performance and target false alarm and detection rates may not be satisfied . 
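For concreteness, the adaptive normalized matched filter mentioned above is commonly written as the ratio of |s^H R^{-1} x|^2 to (s^H R^{-1} s)(x^H R^{-1} x), with the noise covariance R estimated from noise-only training snapshots. The sketch below assumes this textbook form and is not claimed to be the exact variant of any of the cited works; the template, sizes, and training data are illustrative.

```python
import numpy as np

def anmf_statistic(x, s, R):
    """Textbook adaptive normalized matched filter statistic:
    |s^H R^{-1} x|^2 / ((s^H R^{-1} s) (x^H R^{-1} x))."""
    Rinv = np.linalg.inv(R)
    num = np.abs(np.vdot(s, Rinv @ x)) ** 2
    den = np.real(np.vdot(s, Rinv @ s)) * np.real(np.vdot(x, Rinv @ x))
    return num / den

# illustrative usage: covariance estimated from noise-only training snapshots
rng = np.random.default_rng(0)
n, n_train = 32, 256
training = rng.standard_normal((n_train, n))
R_hat = training.T @ training / n_train            # sample covariance
s = np.cos(2 * np.pi * 0.1 * np.arange(n))         # known template
x = 0.5 * s + rng.standard_normal(n)               # "signal present" snapshot
print(anmf_statistic(x, s, R_hat))
```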
since in underwater acoustics the noise characteristics are often fast time varying , an alternative detection scheme is to normalize the matched filter with the power of , .we refer to this scheme as the _ normalized matched filter _ ( nmf ) , as opposed to the anmf .the nmf detector does not require estimation of the noise covariance matrix .instead , its detection threshold depends only on the time - bandwidth product , , of the expected signal . for underwater applications which require detection at target performance in various noise conditions, the nmf may be a suitable choice . in , a low - rank nmfis suggested , where the linear matched filter is normalized by the power of the transmitted signal and a projection of the detected one .the projection is made according to the estimated noise covariance matrix , and the result is a simplified test which is proportional to the output of the standard colored - noise matched filter .a modification of the matched filter is proposed in for the case of a multipath channel .the works in and include analysis for the false alarm and detection probabilities of the nmf .this analysis is either a modification of a similar study of the nmf or is based on semi - analytic matrix representation . due to low signal - to - noise ratio and the existence of narrow band interferences ,hydroacoustic signals are constructed with a large time - bandwidth product of typical values .while the nmf has been analyzed before , for large the available expressions are computationally complicated to evaluate .consequently , it is difficult to evaluate the receiver operating characteristic ( roc ) , which is required to determine the detection threshold . as a result , most underwater applications avoid using the nmf as a detector . considering this problem and based on the probability distribution of the nmf and its moments , in this paper computationally efficient approximations for the probability of false - alarm and for the probability of detection for signals of large offered .this leads to a practical scheme for the evaluation of the roc .simulation results show that the developed expressions are extremely accurate in the large limit . to test the correctness of the analysis in real environment ,results from a sea experiment are reported .the experiment was conducted in the mediterranean sea to detect chirp signals reflected from the sea bottom at depth of 900 m. the reminder of this paper is organized as follows .the system model is presented in section [ sec : model ] . in section [ sec : distribution ] , we derive the probability distribution of the nmf and give expressions for the probability of false alarm and for the probability of detection .next , performance evaluation in numerical simulation ( section [ sec : simulation ] ) and results from the sea experiment ( section [ sec : experiement ] ) are presented in section [ sec : performance ] .finally , conclusions are drawn in section [ sec : conclusions ] .the notations used in this paper are summarized in table [ l : notation ] ..list of major notations [ cols= " < , < " , ]the goal of this paper is to offer a computational efficient determination of the detection threshold of the nmf for signals with large property .since the nmf is executed at the very first step of the reception chain , the receiver poses no information of the channel or range to transmitter .therefore , only an additive noise of unknown variance can be assumed for the system model . 
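A minimal numerical sketch of the normalized matched filter just described, assuming the usual correlation-over-power form of the statistic. Because the statistic is invariant to the unknown noise level, the threshold for a target false alarm probability can be calibrated once and for all, for instance by the simple Monte Carlo below; the buffer length and number of trials are illustrative.

```python
import numpy as np

def nmf_statistic(x, s):
    """Correlation of the buffer x with the unit-energy template s,
    normalized by the measured power of x (invariant to the noise scale)."""
    s = s / np.linalg.norm(s)
    return np.dot(s, x) / np.linalg.norm(x)

def threshold_from_pfa(n, pfa, trials=50_000, seed=1):
    """Monte Carlo threshold under the noise-only hypothesis; the noise
    variance is irrelevant because the statistic is scale invariant."""
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(n)
    stats = np.array([nmf_statistic(rng.standard_normal(n), s)
                      for _ in range(trials)])
    return np.quantile(stats, 1.0 - pfa)

# example: buffer of n = 200 samples, target false alarm probability 1e-3
print(threshold_from_pfa(n=200, pfa=1e-3))
```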
for a received signal , , we consider a binary detection test of hypotheses , in ( [ e : hypo ] ), is an hydroacoustic signal of bandwidth , duration , and is an additive noise .let us define the time - bandwidth product .we assume that is large ( values exceeding 50 are enough ) . in our analysiswe consider the case of real signals .however , as demonstrated in section [ sec : performance ] , the analysis holds for the case of complex signals .we are interested in the following quantity ( referred to as the nmf ) , where and are the sample of and , respectively , and is sampled equally at the nyquist rate . for a detection scheme which uses correlator ( [ e : nmf ] ) as its detection metric , the objective is to develop computational efficient expression for the probability of false alarm and for the probability of detection .both figures are required to determine the detection threshold through the roc .the strong assumption in this paper is of i.i.d zero - mean gaussian noise with variance . as discussed in section [ sec : simulation ] , effect of mismatch in the noise model is shown negligible .however , the case of coloured noise can be treated by including a trivial whitening mechanism in the filtering process .namely ( [ e : nmf ] ) becomes , where is the inverse correlation - matrix satisfying =\delta_{j , k} ] does not depend on or . in the following , and aretherefore refer to as normalized variables .the second moment of the sampled is =e\left[\frac{\sum\limits_{k , l}^{n}s_ks_l\cdot n_kn_l}{\left(\sum\limits_{m=1}^{n}n_m^2\right)}\right]\;. \label{e : var_sample3}\ ] ] to simplify ( [ e : var_sample3 ] ) , one can use the connection such that =\sum\limits_{k , l}^{n}s_ks_l\cdot e\left[\int_{0}^{\infty}n_kn_le^{-\lambda\sum\limits_{m}n_m^2}d\lambda\right]\;. \label{e : var_sample4}\ ] ] since is gaussian , so is the integral in ( [ e : var_sample4 ] ) and =\sum\limits_{k}^{n}s_k^2\cdot e\left[\int_{0}^{\infty}n_k^2e^{-\lambda\sum\limits_{m}n_m^2}d\lambda\right]\;. \label{e : var_sample5}\ ] ] consider .here , =\int_{-\infty}^{\infty}\frac{dn}{\sqrt{2\pi}}\int_{0}^{\infty}n^2e^{-n^2\left(\lambda+\frac{1}{2}\right)}d\lambda= \frac{\gamma\left(\frac{3}{2}\right)}{\sqrt{2\pi}}\int_{\frac{1}{2}}^{\infty}\frac{da}{a^{\frac{3}{2}}}=1\ ; , \label{e : var_example}\ ] ] where is used .the result in ( [ e : var_example ] ) is a good sanity check since for the case of a single sample , the variance of the nmf is 1 . for a general , =\sum\limits_{k}s_k^2\cdot\frac{1}{\left(2\pi\right)^{\frac{n}{2}}}\gamma\left(\frac{3}{2}\right)\pi^{\frac{n-1}{2}}\int_{\frac{1}{2}}^{\infty}\frac{da}{a^{\frac{3}{2}}a^{\frac{n-1}{2}}}= \frac{\sqrt{\pi}}{2^{\frac{n}{2}}}\cdot\frac{1}{n}\pi^{-\frac{1}{2}}2^{\frac{n}{2}}=\frac{1}{n}\;. \label{e : var_sample6}\ ] ] by ( [ e : var_sample6 ] ), the variance of for the case of noise - only signal is inverse proportional to .akyildiz , i. , pompili , d. , melodia , t. , sep .2006 . state - of - the - art in protocol research for underwater acoustic sensor networks . in :acm international workshop on underwater networks - wuwnet .association for computing machinery ( acm ) .http://dx.doi.org/10.1145/1161039.1161043 mason , s. , berger , c. , zhou , s. , willett , p. , dec .detection , synchronization , and doppler scale estimation with multicarrier waveforms in underwater acoustic communication .ieee j. select .areas commun . 26 ( 9 ) , 16381649 .http://dx.doi.org/10.1109/jsac.2008.081204 rangaswamy , m. , 2002 .normalized matched filter - a low rank approach . 
in : conference record of the thirty - sixth asilomar conference on signals , systems and computers .institute of electrical & electronics engineers ( ieee ) , pp .rouseff , d. , badiey , m. , song , a. , 2009 . effect of reflected and refracted signals on coherent underwater acoustic communication : results from the kauai experiment ( kauaiex 2003 ) .j. acoust .126 ( 5 ) , 23592366 .http://dx.doi.org/10.1121/1.3212925 scharf , l. , kraut , s. , mccloud , m. , 2000 .a review of matched and adaptive subspace detectors . in : ieee adaptive systems for signal processing , communications , and control symposium ( cat .no.00ex373 ) .institute of electrical & electronics engineers ( ieee ) , pp .http://dx.doi.org/10.1109/asspcc.2000.882451 scharf , l. , lytle , d. , jul .signal detection in gaussian noise of unknown level : an invariance application .ieee transactions on information theory 17 ( 4 ) , 404411 .younsi , a. , nadhor , m. , 2011 .performance of the adaptive normalised matched filter detector in compound - gaussian clutter with inverse gamma texture model .progress in electromagnetic research ( pier ) b 32 , 2138 .
detection of hydroacoustic transmissions is a key enabling technology in applications such as depth measurements , detection of objects , and undersea mapping . to cope with the long channel delay spread and the low signal - to - noise ratio , hydroacoustic signals are constructed with a large time - bandwidth product , . a promising detector for hydroacoustic signals is the normalized matched filter ( nmf ) . for the nmf , the detection threshold depends only on , thereby obviating the need to estimate the characteristics of the sea ambient noise which are time - varying and hard to estimate . while previous works analyzed the characteristics of the normalized matched filter ( nmf ) , for hydroacoustic signals with large values the expressions available are computationally complicated to evaluate . specifically for hydroacoustic signals of large values , this paper presents approximations for the probability distribution of the nmf . these approximations are found extremely accurate in numerical simulations . we also outline a computationally efficient method to calculate the receiver operating characteristic ( roc ) which is required to determine the detection threshold . results from an experiment conducted in the mediterranean sea at depth of 900 m agree with the analysis . underwater acoustics ; matched filter ; detection .
sunspots are the prominent features , on the solar photosphere , visible in white light .sunspots show a preferred latitudinal dependence which moves from higher latitude to towards the equator with the progress of the 11-year sunspot cycle .this migration pattern of the activity zone is known as ` butterfly diagram ' .similar to the preferred latitudinal belt , solar active longitudes refer to the longitudinal locations with higher activity compared to the rest of the sun .active longitudes in the past has been studied for solar like stars .there have been quite a few studies on the active longitudes , from the observational data as well as from the numerical simulations ( * ? ? ?* see , for a complete review ) .one of the earliest works on the solar active longitudes had been published from kodaikanal observatory by .other notable works were by .in recent times , ( and references therein ) showed a strong correlation between the active longitude and the high speed solar wind .most of the previous works showed the existence of active longitude on smaller time scales of 10 - 15 carrington rotations .using greenwich sunspot data , reported the presence of two active longitudinal zones which are persistent for more than 120 years .these two zones alter their activity periodically between themselves . apart from this` flip - flop ' like behavior , these authors have also shown that the active longitudes move as a rigid structure i.e the separation between the two active longitudes is roughly a constant value of 180 .migration of the active longitude in the sunspot , is studied by where the authors have found that the migration pattern is governed by the solar differential rotation . in this workthey have invoked the presence of a weak non - axisymmetric component in the solar dynamo theory in order to explain the observed longitudinal patterns .there are some criticism of these works too .have shown that some of the above quoted results may be an artefact of the methods used to derive them .active longitudes have also been discovered in other solar proxies .solar flares , specially proton flares are found be associated closely with the active longitude locations . existence of preferred longitudes in the near - earth and near - venus solar wind data .analyzing the x - ray flares observed with noaa / goes satellite , have shown the presence of active longitudes as well as their migration with time .however the differential rotation parameters obtained with the x - ray flares were found to be different from those obtained by using the sunspots in . using a combination of debrecen sunspot data and rhessi data, established a probable dependence of the flare occurrence with the active longitudes .in this paper we use the kodaikanal white - light digitized data for the first time and revisited the active longitude problem with multiple analysis approaches . in section [ data_description ]we give a brief description of the kodaikanal data which we have analyzed , using two recognized methods used in the literature . 
in section [ u_method ]we describe the rectangular grid method and the subsequent results from that .we also study the effect of the sunspot size distribution in this method as shown in section [ area_thresholding ] .the other method called as ` bolometric method ' has been described in section [ b_method ] .periodicities and the migration pattern in the active longitudes is described in section [ period ] and in section [ theory_mig ] respectively , followed by the ` summary and the discussion ' in the end .we have used the white - light digitized sunspot data from the kodaikanal observatory , india . the data period covers more that 90 years , starting from 1921 to 2011 .original solar images were stored on photographic plates and films and were preserved carefully in paper evelopes .these images has been recently digitized ( in 4k format ) by . using a modified stara algorithm on this digitized data , sunspot parameters like area , longitude , latitude etchave been extracted by ( henceforth paper- ) . apart from comparing the kodaikanal data with data from other observatories , in paper- we have also discussed about different distributions in sunspot sizes in latitude as well as in longitude . while detecting the sunspots , images of the detected sunspots were also saved in a binary format .panel ( a ) in figure [ kodai_context ] shows a representative full disc digitized white - light data .two rectangular boxes highlight the sunspots in two hemispheres present on that day .the binary image containing these detected sunspots is shown in panel ( b ) of figure [ kodai_context ] .carrington maps are the mercator projected synoptic charts of the spherical sun in carrington reference frame .we have used the daily detected sunspot images ( as shown in panel ( b ) of figure [ kodai_context ] ) to construct the carrington maps. a longitude band of 60 ( -30 to + 30 in heliographic coordinate ) is selected for each image to construct these maps ( following ) .this involves stretching , b angle correction ( b angle defines the tilt of the solar north rotational axis towards the observer .it can also be interpreted as the heliographic latitude of the observer or the center point of the solar disc ) , shift in the carrington grid and additions .one carrington map has been constructed considering a full 360 rotation of the sun in 27.2753 days . in order to correct for the overlaps, we have used the ` streak map' for every individual carrington map and divided the original maps with them .data gaps occur as black longitude bands in these maps .the whole procedure is shown in various panels of figure [ carr_context ] . herewe must emphasize the fact that in our kodaikanal data we have some missing days ( the complete list of missing days has been published with paper- ) . 
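The construction just outlined can be summarized by the simplified sketch below, which accumulates daily sunspot detections (given as heliographic latitude, longitude relative to the central meridian, and area) onto a Carrington-frame grid and divides by a streak (exposure) map to correct for overlaps. The b-angle correction, the Mercator stretching, and the handling of image intensities are omitted, and the sign convention of the Carrington longitude is an assumption of this sketch.

```python
import numpy as np

CARRINGTON_PERIOD = 27.2753            # days, as quoted above
NLON, NLAT = 360, 180                  # 1-degree synoptic grid

def carrington_longitude(lon_cm_deg, t_days, l0_ref_deg=0.0):
    """Carrington longitude of a feature observed at longitude lon_cm_deg
    relative to the central meridian at time t_days; l0_ref_deg is the
    central-meridian Carrington longitude at the reference epoch."""
    l0 = (l0_ref_deg - 360.0 * t_days / CARRINGTON_PERIOD) % 360.0
    return (l0 + lon_cm_deg) % 360.0

def build_carrington_map(daily_detections):
    """daily_detections: iterable of (t_days, spots), where spots is a list
    of (lat_deg, lon_cm_deg, area) with lon_cm_deg restricted to +/-30 deg."""
    spot_map = np.zeros((NLAT, NLON))
    streak = np.zeros((NLAT, NLON))            # exposure ("streak") map
    for t, spots in daily_detections:
        # every longitude in the observed +/-30 degree band gets one exposure
        band = np.unique(np.round(
            carrington_longitude(np.arange(-30.0, 31.0), t)).astype(int) % NLON)
        streak[:, band] += 1.0
        for lat, lon_cm, area in spots:
            i = int(np.clip(round(lat) + 90, 0, NLAT - 1))
            j = int(round(carrington_longitude(lon_cm, t))) % NLON
            spot_map[i, j] += area
    return np.divide(spot_map, streak, out=np.zeros_like(spot_map),
                     where=streak > 0)

# toy usage: the same spot of area 100 seen on two consecutive days lands in
# the same Carrington cell, and the streak map removes the double counting
days = [(0.0, [(15.0, 5.0, 100.0)]), (1.0, [(15.0, 18.2, 100.0)])]
print(build_carrington_map(days).max())        # ~ 100.0
```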
in order to increase the confidence on the obtained results , we have not considered any carrington map in our analysis which has one or more missing days .we use the generated carrington maps for our further analysis .two different methods , ` rectangular grid ' method and the ` bolometric curve ' method , are used as described in the following sub - sections .one should note that a possible drift of the active longitudes , due to the differential rotation of the sun , is not considered here but later analyzed in section 4.1 .first we follow the ` rectangular grid ' method ( following ) where a full carrington map has been divided in 18 rectangular strips , each of 20 longitudinal width .we then compute a quantity ` weight ' ( ) defined as where is the total sunspot area in the bin .we note down the longitudes of the highest and the second highest active bins and calculate the separation between them ( afterwards referred as ` longitude separation ' ) .we impose a minimum of 20% peak ratio between the second highest and highest peak in order to avoid any sporadic detection .two such representative carrington maps from kodaikanal white - light data archive is shown in panel ( a - b ) of figure [ u_bar ] and their corresponding barplots in panels ( c - d ) .for the two representative cases shown in the figure , we notice that for cr number 1799 the longitude separation is 180 whereas for cr number 1980 the difference is 20 .we compute such longitude separations for each and every carrington map for the whole hemisphere ( referred as ` full disc ' henceforth ) and for individual northern and southern hemispheres .histograms , constructed using these separation values for each of the three mentioned cases , are shown in different panels of figure [ u_histograms ] . in all three cases ( panels( a - c ) in figure [ u_histograms ] ) we see that the maximum occurrence is for the 20 separation .apart from that , we also notice peaks at and at for the full disc case whereas these peaks shifts a little bit for the northern and southern hemispheric cases .apart from these mentioned peaks , for the northern and southern hemispheres , we also see weak bumps in the histograms at and . in an earlier work using greenwich data , had reported a phase difference of 0.5 ( 180 in terms of longitude ) between the two most active longitude bins .the reason of this high number of occurrence , of the longitude separations , at is probably related to the longitudinal extent of the sunspots compared to the chosen bin width of 20 . to be specific , there are frequent cases when the largest sunspot or sunspot groups get shared by the two consecutive longitude bins .now these occurrences are on statistical basis and thus increasing the bin size only shifts the highest peak to the chosen bin value ( e.g for a 40 bin size we find the maximum at 40 ) .since this effect is related with the sunspot sizes , we thus use area thresholding on the sunspots and note down the longitude separations as described in the next section .we now use the area thresholding method on the sunspots found in every carrington maps .one such illustrative example is shown in figure [ t_demo ] .we again use the cr 1980 for demonstration as it has sunspots of various sizes .different panels in figure [ t_demo ] show the carrington map before and after doing different area thresholdings . 
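A compact sketch of the rectangular-grid measure described above: the map is split into 18 strips of 20 degrees, the weight of each strip is the total sunspot area falling into it, and the separation between the two most active strips is kept only when the second peak reaches at least 20% of the first. Reporting the plain (unfolded) difference between bin centres is an assumed convention here, chosen so that separations near 90, 180, and 270 degrees can all occur, as in the histograms discussed above.

```python
import numpy as np

BIN_WIDTH = 20                               # degrees -> 18 longitude strips

def longitude_separation(spot_lons, spot_areas, peak_ratio=0.2):
    """Separation (degrees) between the two most active 20-degree longitude
    bins of one Carrington map, or None if the second peak is weaker than
    peak_ratio times the first."""
    weights, _ = np.histogram(np.asarray(spot_lons) % 360.0,
                              bins=np.arange(0, 361, BIN_WIDTH),
                              weights=np.asarray(spot_areas, dtype=float))
    second, first = np.argsort(weights)[-2:]
    if weights[second] < peak_ratio * weights[first]:
        return None
    centres = BIN_WIDTH * np.arange(18) + BIN_WIDTH / 2.0
    return abs(centres[first] - centres[second])

# toy example: two activity nests roughly 180 degrees apart
print(longitude_separation([5, 12, 190, 195], [120, 80, 150, 60]))   # -> 180.0
```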
after performing the area thresholding, we follow the same procedure as described in earlier sub - section to find the longitude separation between the two most active bins . following the extracted sunspots area distribution from the kodaikanal white - light data ( as shown in fig.7 in ) , we chose two kind of area thresholding values .first set of values correspond to sunspots with smaller sizes .these values range from 10 to 100 as shown in panels ( a - d ) in figure [ area_th ] . from this figure, we immediately notice that the height of the histogram peak at decreases progressively with the decrease of sunspot sizes .this is explained by the fact that as sunspot sizes go down , the probability of a sunspot being shared by two longitude bins also reduces .this result in a lower peak at .also , in every case , we notice prominent peaks in the histograms at and at separations .we next investigate the longitude separation for the sunspots with larger sizes . in this casethe thresholding values range from 100 to 650 . in figure[ area_th ] ( e - h ) we show the histograms of the longitude difference for these different thresholds .we see two noticeable differences in this case compared to the former one of small sunspot sizes .firstly , the peak height of the histograms increases ( relative to the other peak heights ) as we move towards the larger sunspots which is expected due to the reason we discussed earlier .along with that we notice that there is no peak near 90 as found earlier but a peak is still present along with other new peaks .here we should highlight the fact that with higher sunspot area thresholding , the statistics become poor and the peaks become less significant statistically . from our previous analysiswe saw that the discreteness introduced due to the longitude bins has a definitive effect on the calculated longitude separation .therefore , we explore the other method , the bolometric curve method , which produces a smooth curve as described below . in order to generate the smooth bolometric profile , we first invert the intensities of the carrington maps to make the white background black with sunspot as a bright feature .next , the map is stretched to convert sine latitude into latitude ( figure [ fig : bolometric]a ) . we then generate a limb darkening profile with the expression , profile = in latitude and longitude where is the cosine of heliocentric angle ( figure [ fig : bolometric]b ) .we then create an intermediate map by shifting the limb darkening profile , multiplying and adding with the intensity inverted carrington map along every longitude ( figure [ fig : bolometric]c ) . 
in the end , this intermediate map is added along latitude for each longitude to generate a factor , called .the curve is converted to a bolometric magnitude curve ( ) ( figure [ fig : bolometric]d ) using the expression defined as \label{b_equation}\ ] ] where bright surface temperature = 5750 and sunspot temperature = 4000 .figure [ b_image ] shows the two representative plots of this bolometric method .we chose the same two carrington maps as shown in figure [ u_bar ] for easy comparison between the two methods .now we see that for the cr 1799 , the bolometric curve basically traces the active bars ( as shown in panel ( c ) of figure [ u_bar ] ) i.e the minimum of the bolometric curve represent the locations of maximum spot concentrations .we notice that the separation in this case ( 179 ) equals to the separation obtained previously ( 180 ) but for the cr 1980 , the difference in this case is whereas previously obtained value was 20 .this is because the bolometric curve takes into account of close spot concentrations contradictory to the fixed longitudinal bins as defined in the rectangular grid method . in principleif we smooth out the peaks shown in panel ( d ) of figure [ u_bar ] , we should then arrive with a similar curve as of the bolometric one but the amount of smoothing is subjective and may be different for different carrington maps .however in the case of bolometric method , we must emphasize that the bolometric curve has been generated using a fixed prescription ( equation [ b_equation ] ) and thus it is free from any subjectivity issue .similar to the earlier method , in this case also we have calculated the longitude separation between the two most active spot concentrations for every carrington map and plotted the histograms as shown in figure [ b_histo ] .we can clearly see that for every case ( full disc , northern and southern hemisphere ) the histograms peak at . in each case , the histogram distribution look similar to a bell - shaped curve .we thus fit every distribution with a gaussian function as shown by the solid black lines in figure [ b_histo ] .the centers of the fitted gaussians , for the three cases , are at 178 , 180 , 176 .this agrees well with the results found by .apart from the well structured peak at , we also highlight the two other peaks ( though considerably weaker ) at and by two arrows in figure [ b_histo ] .we remind the reader here that the these peaks were also found from the rectangular grid method ( figure [ u_histograms ] ) .these two peaks at and probably arise due to the dynamic nature of the active longitude locations . apart from that, there could also have been some contributions due to the different sunspot sizes on the active longitude separations .next we investigate the occurrences of the peaks found in figure [ b_histo ] for every individual solar cycle ( cycle 16 to 23 ) .different panels of figure [ b_cycles ] show the longitude separation histograms for the full period as well as for the individual cycles .we notice that the separation peaks at 180 for every cycle and the height of this peak follow the cycle strengths i.e the strongest cycle , cycle 19 in this case , has the maximum number of occurrences at 180 and so on . 
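The Gaussian fit to the separation histograms mentioned above can be sketched as follows with scipy; the bin width and the synthetic separations in the usage example are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_separation_peak(separations, bin_width=10):
    """Histogram the longitude separations and fit a Gaussian to locate the
    dominant peak (expected near 180 degrees)."""
    edges = np.arange(0, 360 + bin_width, bin_width)
    counts, _ = np.histogram(separations, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), 180.0, 30.0]              # initial guess
    popt, _ = curve_fit(gaussian, centres, counts, p0=p0)
    return popt                                    # [amplitude, centre, width]

# toy usage with synthetic separations clustered around 180 degrees
rng = np.random.default_rng(2)
print(fit_separation_peak(rng.normal(180.0, 25.0, size=500)))
```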
also we notice that the two other peaks ( at and ) are also present in most of the cycles , though with lesser strengths .thus we confirm that these active longitudes persist for the whole 90 years of data analyzed in this paper .active longitudes have been shown to migrate with the progress of the solar cycle .also the the activity switches periodically between the two most active longitude zones .we investigate the same by using the longitude information of the maximum dip ( l ) using the ` bolometric curve ' method ( for an example , the 330 longitude of panel ( c ) of figure [ b_image ] ) .since we have rejected any carrington map which has a data gap ( due to missing days in the original kodaikanal data ) , we do not have a continuous stretch of l for more than 8 years .figure [ b_lc ] shows the time variation of the l for four different solar cycles for which we have minimum of 6 years of continuous values of l . to smooth out the small fluctuations , we have performed running averaging of 6 months ( following ) . from the plot, we clearly identify periodic variations in every light curves . to get a quantitative estimation of the periods, we use the wavelet tool .results from the wavelet analysis on the l light curves ( panels ( a - d ) of figure [ b_lc ] ) are shown in figure [ wave_obs ] . in all these plots , left panel shows the wavelet power spectrum and the right panel shows the global wavelet power which is nothing but the wavelet power at each period scale averaged over the time .the 99% significance level calculated for the white noise has been represented by the contours shown in the wavelet plot and by the dotted line plotted in the global wavelet plot .the effect of the edges represented by the cone of influence ( coi ) , has been shown as the cross - hatched region .obtained periods are indicated in the right hand side of each plots . herewe must highlight the fact that due to the shorter time length of the light curves ( 9 years ) the maximum measurable period in the wavelet ( due to coi ) is always 3.5 years .the global wavelet plots indicate two prominent periods of 1.3 years and 2.1 years .this means that the position of the most active bin moves periodically and these periods persist over all cycles investigated in this case .the occurrence of these two periods are particularly interesting as they have been found using the sunspot area time series from different observatories around the world .also the presence of these periods in all the cycle again confirm their connection with the global behavior of the solar cycle .we notice in figure [ b_lc ] that there is an average drift of the longitude of maximum activity with the progress of the cycles and this probably is connected with the solar differential rotation as explored in the following section .previous studies have shown that the migration of active longitudes is governed by the solar differential rotation . according to , the migration pattern can be easily explained if one uses the differential rotation profile suitably .thus , we move over to a dynamic reference frame defined by solar differential rotation as described below .the rotation rate of the longitude of activity for the carrington rotation can be expressed as , where denotes the sunspot area weighted latitude with and being /day and respectively ( following ) . 
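For orientation, the rotation rate relation of the preceding sentence is recorded here in the form it usually takes in this kind of analysis, a reconstruction under standard conventions rather than a quotation of the paper:
\[
\Omega_i \;=\; \Omega_0 \;-\; B\,\sin^{2}\!\langle\theta\rangle_i ,
\]
where \(\langle\theta\rangle_i\) is the sunspot-area-weighted latitude of Carrington rotation \(i\), \(\Omega_0\) is close to the equatorial sunspot rotation rate of about 14.3 degrees per day, and \(B\), of a few degrees per day, measures the strength of the differential rotation.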
using this rotation rate the longitudinal position of active longitude in carrington frame for the rotation ( ) can be calculated from the same at rotation ( ) through the relation defined as , with and days . from the longitudes ,phases are calculated as .these phases are made continuous ( figure [ fig : phase ] ) by minimizing with spanning over positive and negative integers . is replaced by for the which gives the minimum absolute difference mentioned .we calculate the missing phases to fill the gaps occurred due to missing carrington maps by interpolating over . from figure[ fig : phase ] , we see quite a few distinct features . immediately we recover the 11-year period of the solar cycle . also , we see that for an individual cycle the curve first steepens and then dips towards the later half of the cycle .this is explained by considering the fact that in the beginning of a solar cycle , sunspots appear at higher latitudes where the rotation rate is quite different from the carrington rotation rate ( which is basically the rotation rate at latitude ) . as the cycle progress , sunspots move down towards the equator and the curves then tends to flatten . .] cycle .sigma thresholded image of the same is shown in the bottom panel .overplotted red and green symbols indicate the phases as obtained using equation [ theory ] . ]though we call it a ` theoretical curve ' but we want to remind the reader that the area weighted latitude information is extracted for the generated carrington maps and thus it will be appropriate to call it a ` data driven theoretical curve ' .we use this ` theoretical curve ' to demonstrate the association of the migration of the active longitudes with the solar differential rotation . in the top panel of figure [ 19_cycle ]we plot the full disc bolometric profiles ( as obtained previously ) and stacked them over carrington rotation for the period 1954 to 1965 ( corresponds to 19 cycle ) .the two dark curves are the manifestation of the two dips corresponding to two active longitudes .to highlight this trend more , we use sigma thresholding ( i.e mean+ ) on the original image and plotted it in the bottom panel of figure [ 19_cycle ] .we then generate the theoretical curves , corresponding to two active longitudes as obtained from each bolometric profiles and overplotted them . a good match between the theoretical curve and the obtained active longitude positionsconfirms the fact that the migration of these active longitude is indeed dictated by the solar differential rotation .here we again highlight the fact that the missing phases have been filled using the interpolation method .since the current phase has contributions from the previous phases ( see equation [ theory ] ) , we could not match every details of the observed pattern for all the cycles . 
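The bookkeeping of the last two steps, converting the drifting longitudes into phases and making the phase sequence continuous by adding the integer that minimizes the jump between consecutive rotations, can be sketched as follows. The linear interpolation over rotations with missing maps is an assumption of this sketch.

```python
import numpy as np

def continuous_phases(longitudes_deg):
    """Convert active-longitude positions (degrees, Carrington frame) into
    phases and remove the 360-degree ambiguity: each phase is shifted by the
    integer that minimizes the jump to the previous rotation."""
    phases = np.asarray(longitudes_deg, dtype=float) / 360.0
    out = [phases[0]]
    for p in phases[1:]:
        n = round(out[-1] - p)          # integer minimizing |p + n - previous|
        out.append(p + n)
    return np.array(out)

def fill_missing(phases, valid):
    """Linear interpolation of the phase over rotations with missing maps."""
    idx = np.arange(len(phases))
    return np.interp(idx, idx[valid], phases[valid])

# toy usage: a longitude drifting through 360 -> 0 is unwrapped smoothly
print(continuous_phases([350.0, 355.0, 2.0, 8.0]))
```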
the small discrepancy between the theoretical curve and the datacould also be due to the fixed values of the differential rotation parameters used in this study which may not be suitable for all cycles .in this paper in the context of active longitude , we have analyzed , for the first time , the kodaikanal white - light digitized data which covers cycles 16 - 23 .we have analyzed the data with two previously known methods : ` rectangular grid ' method and the ` bolometric curve ' method for the full disc as well as for the individual hemispheres .below we summarize the key findings from the two methods : from the two methods , we see that for the entire duration of the data analyzed , we find two persistent longitude zones or ` active longitudes ' with higher activity .this is consistent with the results from the greenwich data as obtained by . using the ` rectangular grid ' method we have constructed the histograms of the longitude separation between the two active longitudes and found prominent peaks in the histograms at , and at .we also noted that the highest peak occurs for the separation of 20 .we use area thresholding on the sunspots to show that the peak at 20 is due to the presence of relatively large sunspots being shared by two consecutive longitude bins . using the bolometric method we recover the peaks at , and at as found earlier . also , we found that the peak height for the 180 separation is much higher than the other two peaks .we fitted the central lobe with a gaussian function and estimated the center location . applying this method for individual solar cycles we established that the peak at 180 is always present in every solar cycle . using temporal evolution of the peak location of highest activity we have demonstrated the presence of two periods using the wavelet analysis .the two prominent periods are 1.1 - 1.3 years and 2.1 - 2.3 years .these two periods are routinely found in the sunspot area and sunspot number time series .apart from that we also observe another period of years with significant power . due to shorter length of the time series ,this period is beyond the detection confidence level .however the presence of the period directly indicate its connection with the global solar dynamo mechanism which needs to be investigated further . finally , we use the solar differential rotation profile to construct a dynamic reference frame .a theoretical curve has been generated using area weighted sunspot latitude information from the carrington maps . while overplotting this curve on top of the sigma thresholded image of the bolometric profiles , we have shown that the migration pattern follows the solar differential rotation as found in some of the previous studies .to conclude , we found signatures of persistent active longitudes on the sun using the kodaikanal data .we hope that with these observational results along with the solar models , understanding of the physical origin of active longitudes can be advanced .the authors would like to thank mr .gopal hazra for his useful comments in preparing this manuscript .we thank the reviewer for his / her constructive comments and suggestions which improved the content and presentation of the paper .we would also like to thank the kodaikanal facility of indian institute of astrophysics , bangalore , india for proving the data .this data is now available for public use at http://kso.iiap .res.in/data ] .
the study of solar active longitudes has generated a great interest in the recent years . in this work we have used an unique continuous sunspot data series obtained from kodaikanal observatory and revisited the problem . analysis of the data shows a persistent presence of the active longitude during the whole 90 years of data duration . we compare two well studied analysis methods and presented their respective results . the separation between the two most active longitudes is found be roughly 180 for majority of time . additionally , we also find a comparatively weaker presence of separations at 90 and 270 . migration pattern of these active longitudes as revealed from our data is found to be consistent with the solar differential rotation curve . we also study the periodicities in the active longitudes and found two dominant periods of .3 years and .2 years . these periods , also found in other solar proxies , indicate their relation with the global solar dynamo mechanism .
the loewner equation is an important result in the theory of univalent functions that has found important applications in nonlinear dynamics , statistical physics , and conformal field theory . in its most basic formulation , the loewner equation is a first - order differential equation for the conformal mapping from a given ` physical domain , ' consisting of a complex region minus a curve emanating from its boundary , onto a ` mathematical domain ' represented by itself .usually , is either the upper half - plane or the exterior of the unit circle , but recently the loewner equation for the channel geometry was also considered .the loewner equation depends on a driving function , here called , that is the image of the growing tip under the mapping .an important development on the theory of the loewner equation was the discovery by schramm that when the driving function is a brownian motion the resulting loewner evolution describes the scaling limit of certain statistical mechanics models .this result spurred great interest in the so - called stochastic loewner equation .recently , the deterministic loewner equation was also used to study the problem of laplacian fingered growth in both the half - plane and radial geometries as well as in the channel geometry . in this case , the driving function has to follow a specific time evolution in order to ensure that the finger tip grows along the gradient lines of the corresponding laplacian field .the idea of using iterated conformal maps to generate aggregates was first deployed by hastings and levitov in the context of stochastic growth models , such as diffusion limited aggregation ; see , e.g. , for further developments along those lines .a deterministic version of the hastings - levitov model that is closely related to the loewner - equation approach albeit not using explicitly such a formalism was studied in . in this paperwe consider the problem of laplacian growth within the context of the loewner evolution , and present a new method of deriving the corresponding loewner equation for a broad class of growth models in the half - plane .our method is based on the schwarz - christoffel ( sc ) transformation between the mathematical planes and , where is an infinitesimal time interval . more specifically , the method consists of expanding the integrand of the sc formula in powers of the appropriate infinitesimal quantity ( related to ) and then performing the integrals up to the leading - order term .our method correctly yields the loewner evolution for the case of slit - like fingers studied before .more importantly , the method is able to handle more general growth problems , so long as the growth rule can be specified ( in the mathematical plane ) in terms of a polygonal curve , in which case the schwarz - christoffel transformation can be used .an example is given for the case of a bubble growing from the real axis into the upper half - plane .in order to set the stage for the remainder of the paper , we wish to begin our discussion by considering the simplest loewner evolution , namely , that in which a curve starts from the real axis at and then grows into the upper half--plane , where the curve at time is denoted by and its growing tip is labeled by .now let be the conformal mapping that maps the ` physical domain , ' corresponding to the upper half--plane minus the curve , onto the upper half - plane of an auxiliary complex -plane , called the ` mathematical plane , ' i.e. 
, we have , where with the curve tip being mapped to a point on the real axis in the -plane ; see fig .[ fig . 1 . ] .furthermore , we consider the growth process to be such that the accrued portion of the curve from to , where is an infinitesimal time interval , is mapped under to a vertical slit in the mathematical -plane ; see fig .[ fig . 1 . ] .the mapping function must also satisfy the initial condition since we start with an empty upper half - plane .we also impose the so - called hydrodynamic normalization condition at infinity : these conditions specify uniquely the mapping function .-plane and the mathematical - and -planes at times and , respectively , for a single finger in the upper half - plane .the mapping maps the curve ( at time ) onto a segment of the real axis on the -plane , whereas the accrued portion of the curve ( during time interval ) is mapped to a vertical slit .the mapping is obtained as the composition of and the slit mapping ; see text.,scaledwidth=60.0% ] from a more physical viewpoint , the problem formulated above belong to the class of laplacian growth models where an interface evolves between two phases driven by a scalar field , representing , for example , temperature , pressure , or concentration , depending on the physical problem at hand . in one phase , initially occupying the entire upper half - plane , the scalar field satisfies the laplace equation whereas in the other phase one considers =const ., say , with the curve representing a finger - like advancing interface between the two phases .( here the finger is assumed to be infinitesimally thin . ) the complex potential for the problem can then be defined as , where is the function harmonically conjugated to . on the boundary of the physical domain ,consisting here of the real axis together with the curve , we impose the condition , whereas at infinity we assume a uniform gradient field , , or alternatively , from this point of view , the mapping function introduced above corresponds precisely to the complex potential of the problem .in particular , the fact that in the -plane the curve grows along a vertical line implies that the finger tip grows along gradient lines in the -plane . to specify completely a given physical model ,one has also to prescribe the interface velocity , which is usually taken to be proportional to some power of the gradient field : . for most of the problems considered herethe specific velocity model is not relevant , in the sense that the finger shapes will be independent of the exponent , which only affects the time scale of the problem .( however , there are situations , such as the case of competing asymmetrical fingers , where different s may yield different patterns . ) for convenience of notation , we shall represent the mathematical plane at time as the complex -plane and so we write .now consider the mapping , from the upper half--plane onto the mathematical domain in the -plane ; see fig .[ fig . 1 . ] .the mapping function can then be given in terms of as where is the inverse of .the above relation governs the time evolution of the function and naturally leads to the loewner equation . a standard way of showing this is to construct the slit mapping explicitly , substitute its inverse in ( [ eq:1 ] ) , and then take the limit .one then finds the loewner equation where is the so - called growth factor which is related to the tip velocity .one can show that , where is the inverse of . 
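For reference, the chordal Loewner equation being referred to here reads, under the usual conventions and with \(a(t)\) the driving function and \(d(t)\) the growth factor,
\[
\frac{\partial g_t(z)}{\partial t} \;=\; \frac{d(t)}{g_t(z)-a(t)},
\qquad g_0(z)=z ,
\]
together with the hydrodynamic normalization \(g_t(z)=z+O(1/z)\) as \(z\to\infty\); for the single symmetric finger the driving function is constant, consistent with the vertical trajectory of the tip noted above.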
here, however , the specific form of is not relevant since we can always rescale the time coordinate in ( [ loewner ] ) so as to set . from symmetry , one also gets so that const . , which implies that the tip simply traces out a vertical line in the -plane . in more general situations , such as the case of multiple fingers ,the function can no longer be obtained in closed form and so one has to resort to alternative approaches to derive the loewner evolution .previous methods consider a series of compositions of the basic one - slit mapping . here , however , we will apply a more direct method based on the schwarz - christoffel transformation to obtain the loewner equation for slit - like fingers in the half - plane . in the next section , our method will be extended to treat the case of a growing bubble in the upper half - plane . to illustrate how the method works ,let us first use it to derive the loewner equation ( [ loewner ] ) for a single finger .we begin by inverting ( [ eq:1 ] ) so as to write the mapping from the upper half--plane onto the upper half--plane with a vertical slit ( see fig . [ fig. 1 . ] ) is easily found by a direct application of the schwarz - christoffel formula .one then finds where .note from fig .[ fig . 1 . ] that the parameter is related to the ( infinitesimal ) height of the slit in the -plane .the above integral can be performed exactly , as already mentioned , but here we take an alternative approach , namely , we first expand the integrand in ( [ dois ] ) in powers of and then compute the relevant integrals afterwards . to do that we first rewrite ( [ dois ] ) in the form ^{-2 } } } + a(t).\ ] ] after expanding the integrand in powers of and performing the relevant integrals , one obtains that , up to order , equation ( [ eq : gt ] ) becomes now expanding this equation up to the first order in , dividing by , and then taking , one gets where using the boundary condition , which follows from ( [ tres ] ) , yields precisely the loewner equation ( [ loewner ] ) together with the condition ( [ eq : dota ] ) . herewe consider the case of multiples fingers , , growing from the real axis into the upper half--plane , as shown in fig .2 . ] . as before ,the map maps the physical domain in the -plane onto the upper half--plane and the tips are required to grow along gradient lines , so that the accrued portions of the curves during an infinitesimal time are mapped under to vertical slits emanating from the real axis ; see fig [ fig .2 . ] .the mapping , from the upper half--plane to the upper half--plane with vertical slits , can again be easily obtained from the schwarz - christoffel transformation : where for a given .[ we remark parenthetically that in writing ( [ eq:8 ] ) we have assumed , for simplicity , that the slits in the -plane are mapped under onto symmetrical segments on the real -axis ; see fig [ fig .2 . ] . rigorously speaking ,this is valid only in the limit that the slit heights become vanishingly small , that is , when , which is the relevant limit for us here . 
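For the multi-finger case, the system referred to above as eqs. (eq:11) and (eq:12) is, in the form such equations are usually written (again a reconstruction under standard conventions, not a quotation of the paper),
\[
\frac{\partial g_t(z)}{\partial t} \;=\; \sum_{i=1}^{n}\frac{d_i(t)}{g_t(z)-a_i(t)},
\qquad
\frac{d a_j}{dt} \;=\; \sum_{i\neq j}\frac{d_i(t)}{a_j(t)-a_i(t)},
\quad j=1,\dots,n .
\]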
]the integral in ( [ eq:8 ] ) can not be performed exactly for arbitrary , hence in order to obtain the loewner equation for this case we first need to expand the integrand in powers of the infinitesimal parameters and then proceed with the integration .note , however , that each term in ( [ eq:8 ] ) is of the same form as that appearing in ( [ dois ] ) for the case of a single finger .we can thus build upon our experience with that case to treat the present situation .in particular , we notice that the mixed terms involving different s in the expansion of the integrand in ( [ eq:8 ] ) are all of orders higher than and hence need not be considered , for they do not contribute to the final result in the limit .thus , to the extent that the mixed terms can be neglected , we can rewrite ( [ eq:8 ] ) as ^ 2 } } + a_j(t ) .\label{eq:8b}\ ] ] now repeating the exactly same procedure used for the single finger case , see eqs .( [ eq : gt])-([eq : dg ] ) , one readily obtains where after using the condition in ( [ eq : dgt ] ) , we get the loewner equation for multiple curves with the time evolution of the points being given by the following system of ordinary differential equations if the growth factors are all the same , we can again rescale the time variable so as to set .in particular , in the case of two symmetrical fingers ( i.e. , ) , equation ( [ eq:11 ] ) can be integrated exactly to yield the mapping function , from which the finger shapes can be computed analytically . a related exact solution for two fingerswas obtained in .an alternative derivation of ( [ eq:11 ] ) was given elsewhere using a composition of single - slit mappings .our method , however , is somewhat more direct in the sense that it considers a single mapping with slits as shown in fig [ fig .2 . ] . in the next sectionwe will extend our method to include the case of a growing bubble in the half - plane .is mapped under to a tent - like shape ; see text for details.,scaledwidth=60.0% ] here we consider the problem of an interface starting initially from a segment , say , ] on the real axis .the different curves represent the interface at time intervals of ., scaledwidth=60.0% ] in fig .we show numerical solutions of ( [ eq:14 ] ) with for various final times , starting from up with a time separation of between successive curves . to generate the curves shown in fig .[ fig . 5 . ]we used the numerical scheme described in .more specifically , we start with a ` terminal condition ' , for $ ] , and integrate the loewner equation ( [ eq:14 ] ) backwards in time , using a runge - kutta method of second order , to get the corresponding point on the interface .( see also for a recent review on numerical integration of the loewner equation . 
) from fig .one sees that the bubble initially grows somewhat slowly and then rapidly expands and tends to occupy the whole plane for large times .in fact , one can show that ( for ) the tip velocity grows exponentially with time as .it is possible to modify the growth factor so to have the tip velocity related to the gradient of the field , as discussed in sec .[ sec:2 ] , but this does not change the interface shapes and only alters the time scale of the bubble evolution .we have presented a novel method to derive the loewner equation for laplacian growth problems .the method is based on the schwarz - christoffel ( sc ) transformation and consists of expanding the integrand of the sc formula in the appropriate infinitesimal parameter , performing the relevant integrals , and then taking the limit in which the infinitesimal parameter goes to zero .our method is able to reproduce the loewner evolution for the problem of slit - like fingers in both the half - plane ( sec .[ sec:2 ] ) and the channel geometry ( not shown here ) .furthermore , the method can be extended to treat more complicated growth problems , so long as the growth dynamics in the complex - potential plane can be specified in terms of a polygonal boundary , in which case the schwarz - christoffel transformation can be used .we note however that the requirement that the growth rule be formulated in terms of a polygonal curve is not as restrictive as it seems , for any simple curve can in principle be approximated by a piecewise linear function .[ such more general growth models are currently under investigation . ] in particular , we obtained the loewner equation for a novel situation in which a bubble grows from a segment of the real line into the upper half - plane .although in this case we refer to the evolving interface as a growing ` bubble , ' in contrast to the slit - like ` fingers ' of sec .[ sec:2 ] , this terminology should not be taken literally .depending on the physical problem at hand , such growing interface may represent , say , an expanding front in combustion experiments or in electrochemical deposition .of course , further work is necessary to relate more directly the growth models discussed here to experiments .this work was supported in part by the brazilian agencies finep , cnpq , and facepe and by the special programs pronex and ct - petro .30 lwner k 1923 _ math .ann . _ * 89 * 103 .
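As a complement to the numerical scheme discussed above (terminal condition plus backward second-order Runge-Kutta integration of the Loewner equation), the sketch below traces a point of the curve by composing the exact inverse one-slit maps over small time steps. This is a deliberately different, though common, discretization choice and not the scheme of the cited reference; the driving function and growth factor are passed in as callables, and the toy usage checks against the constant-driving case, whose tip \(i\sqrt{2t}\) is known in closed form.

```python
import numpy as np

def sqrt_upper(w):
    """Square root with the branch chosen so the image lies in the closed
    upper half-plane."""
    r = np.sqrt(w + 0j)
    return r if r.imag >= 0 else -r

def trace_point(a, d, t, nsteps=2000):
    """Approximate the curve point gamma(t) = g_t^{-1}(a(t)) by composing the
    exact inverse one-slit maps over small time steps."""
    dt = t / nsteps
    times = (np.arange(nsteps) + 0.5) * dt        # midpoints of the steps
    w = a(t)
    for s in times[::-1]:                         # g_1^{-1} o ... o g_n^{-1}
        ak, dk = a(s), d(s)
        w = ak + sqrt_upper((w - ak) ** 2 - 2.0 * dk * dt)
    return w

# sanity check: constant driving a = 0 and growth factor d = 1 give a vertical
# slit whose tip is gamma(t) = i*sqrt(2 t) in this normalization
print(trace_point(lambda s: 0.0, lambda s: 1.0, t=1.0))   # ~ 1.414j
```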
the problem of laplacian growth is considered within the loewner - equation framework . a new method of deriving the loewner equation for a large class of growth problems in the half - plane is presented . the method is based on the schwarz - christoffel transformation between the so - called ` mathematical planes ' at two infinitesimally separated times . our method not only reproduces the correct loewner evolution for the case of slit - like fingers but also can be extended to treat more general growth problems . in particular , the loewner equation for the case of a bubble growing into the half - plane is presented .
growing evidence suggests that metabolism is a dynamically regulated system that reorganizes under evolutionary pressure to safeguard survival .this adaptability implies that metabolic phenotypes directly respond to environmental conditions .for instance , unicellular organisms can be stimulated to proliferate by controlling the abundance of nutrients available . in rich media , cells reproduce as quickly as possible by fermenting glucose , a process which produces high specific growth rates as well as large quantities of excess carbon in the form of ethanol and organic acids , a process known as the crabtree effect . to survive the scarcity of nutrients during starvation periods ,glycolysis is hypothesized to switch to oxidative metabolism , which no longer maximizes the specific growth rate , but instead the atp yield needed for cellular processes .cells of multicellular organisms show similar metabolic phenotypes , relying primarily on oxidative phosphorylation when not stimulated to proliferate and changing to non - oxidative glycolytic metabolism during cell proliferation , even if this process known in cancer cells as the warburg effect is much less efficient at the level of energy yield .these metabolic phenotypes are captured by computational approaches like flux balance analysis ( fba ) that has been applied to high - quality genome - scale metabolic network reconstructions to estimate the fluxes of biochemical reactions at steady state .compliant with stoichiometric mass balance constraints and with imposed upper and lower bounds for nutrients , fba determines the flux distribution that optimizes a biological objective such as specific growth rate , biomass yield , atp yield or the rate of production of a biotechnologically important metabolite .this important tool has been used to predict the growth rate of organisms and to analyze their viability .minimization of metabolic adjustment ( moma ) , which identifies a single suboptimal point in flux space , has been proposed as an alternative option for perturbed metabolic networks not exposed to long - term evolutionary pressure . in any case , the identified solutions are frequently inconsistent with the biological reality since no single objective function describes successfully the variability of flux states under all environmental conditions , and in fact the highest accuracy of fba predictions is achieved whenever the most relevant objective function is tailored to particular environmental conditions according to the empirical evidence for a very specific metabolic phenotype .for instance , fba maximization of growth rate , by far one of the most common assumption , requires either a rich medium or a manual limitation of the oxygen uptake to a physiological enzymatic limit to mimic the observed fermentation of glucose to formate , acetate , or ethanol typical of proliferative metabolism , while in minimal medium optimization of growth rate relies primarily on oxidative phosphorylation , which increases atp production converting glucose to carbon dioxide , as in starvation metabolism . 
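in practice the optimization just described reduces to a linear program: maximize an objective flux subject to steady-state mass balance and to bounds on every flux. the sketch below solves such a program for a deliberately small toy network whose stoichiometry, bounds and reaction roles are invented for illustration (it is not the _ e. coli _ core model used later).

```python
# minimal sketch of fba as a linear program on a hypothetical toy network: maximize the last
# flux ('biomass') subject to steady-state mass balance S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

S = np.array([
    [1, -1,  0,  0],    # metabolite a: produced by the uptake reaction, consumed by r2
    [0,  1, -1,  0],    # metabolite b: produced by r2, consumed by r3
    [0,  0,  1, -1],    # metabolite c: produced by r3, consumed by the biomass reaction
])
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]   # uptake capped at 10 mmol/(gdw h)

c = np.zeros(S.shape[1])
c[-1] = -1.0            # linprog minimizes, so maximize biomass by minimizing its negative

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun)    # here simply pinned by the uptake bound
print("flux distribution:", res.x)
```

in this toy case the optimal biomass flux is simply pinned by the uptake bound, which mirrors how, in real models, the choice of medium and uptake limits shapes the predicted optimal phenotype.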
along optimal metabolic phenotypes, there is however a whole space of possible states non - reachable by invoking optimality principles that prevent non - optimal or typical biological states .optimization of a biological function in the absence of _ a priori _ biological justification , like what happens under conditions for proliferative or starvation metabolism , may indeed weaken _ in silico _ predictions .elementary flux modes non - decomposable steady - state pathways through a metabolic network such that any possible pathway can be described as a non - negative linear combination of them provide a view on the flux space without the requirement of any optimality function . however , calculation of all elementary flux modes for an entire network is computationally very demanding due to the combinatorial explosion of their number with increasing size of the network .for instance , the core metabolism of _ escherichia coli _ in consists of around 271 million elementary flux modes . to overcome this handicap , recent advances avoid the comprehensive enumeration of elementary flux modes using instead a sample of the available elementary flux mode solution space .even admitting one is able to enumerate all elementary flux modes , it is however impossible to assess the likelihood of observing a given linear combination of them in a typical phenotype .further , elementary flux modes can not capture changes associated with reaction fluxes being capped for whichever physiological reason ( fig .s6 in supporting information ( si ) ) . on top , due to functional redundancy , the expansion of possible metabolic pathways in elementary flux modes is not unique . therefore , enumeration of the elementary flux modes is not as insightful as characterizing the whole phenotypic space , albeit requiring a comparable computational complexity . here, we introduce an alternative approach that estimates directly the feasible flux phenotypic ( ffp ) space using a mathematically well characterized sampling technique which enables the analysis of feasible flux states in terms of their likelihood .we use it to confront optimal growth rate solutions with the whole set of feasible flux phenotypes of _ escherichia coli _( _ e . coli _ ) core metabolism in minimal medium .the ffp space provides a reference map that helps us to assess the likelihood of optimal and high - growth states .we quantitatively and visually show that optimal growth flux phenotypes are eccentric with respect to the bulk of states , represented by the feasible flux phenotypic mean , which suggests that optimal phenotypes are uninformative about the more probable states , most of them low growth rate .we propose feasible flux phenotypic space eccentricity of experimental data as a standard tool to calibrate the deviation of optimal phenotypes from experimental observations .finally , the analysis of the entire high - biomass production region of the feasible flux phenotypic space unveils metabolic behaviors observed experimentally but unreachable by models based on optimality principles , which forbid aerobic fermentation -a typical pathway utilization of proliferative metabolism- in minimal medium with unlimited oxygen uptake .the ffp space , also termed the flux cone , of a metabolic model in a specific environment has been explored using different sampling techniques . 
here, we use the hit - and - run ( hr ) algorithm to explore the ffp space , tailoring it to enhance its sampling rate and to minimize its mixing time .we refer the interested reader to , where our implementation was first introduced , stating here only the key points and ideas .we start by noticing that all points in the ffp space must simultaneously satisfy mass balance conditions and uptake limits for internal and exchanged metabolites , respectively .the former requirement defines a set of homogeneous linear equalities , whose solution space is , while the latter defines a set of linear inequalities , whose solutions lie in a convex compact set . from a geometrical point of view , the ffp space is thus given by the intersection .a key step of our approach consists in realizing that one can directly work in by sampling in terms of a basis spanning .this allows to retrieve all ffps that satisfy mass balance in the medium conditions under consideration , without rejection .additionally , sampling in allows to perform a drastic dimensional reduction and to decrease considerably the computation time . indeed , assuming to have reactions , internal metabolites , and exchanged metabolites ( ) , one has that , which is typically a space with greatly reduced dimensionality with respect to . once a basis for is found , the main idea behind hr is fairly simple .given a feasible solution , a new , different feasible solution can be obtained as follows : 1 .choose a random direction in 2 .draw a line through along direction : 3 .compute the two intersection points of with the boundary of , parametrized by : 4 . choose a new point from , uniformly at random between and . in practice , this implies choosing a value in the range uniformly at random , and then this procedure is repeated iteratively so that , given an initial condition , the algorithm can produce an arbitrary number of feasible solutions ( see fig .s4 in si for an illustrative representation of the algorithm ) . the initial condition , which must be a feasible metabolic flux state itself ( _ i.e. _ it must belong to ) ,is obtained by other methods .we used and recommend minover , see , but any other technique is valid .in particular , in cases where small samples of the ffp space have been already obtained by other sampling techniques , such points can be used to feed the hr algorithm and produce a new , larger sample .it was proven that hr converges towards the uniform sampling of and we took several measures to ensure that this was the case in our implementation ( fig .s4 in si ) . for each model , we initially created samples of size , giving rise to a final set of feasible solutions , uniformly distributed along the whole ffp space .compared to phenotypic optimisation or , _e.g. _ , elementary flux modes , ffp sampling has the advantage of allowing the computation of reaction pairs correlations .these may be exploited to detect how global flux variability emerges in the system through principal component ( pc ) analysis and to quantify , in turn , the closeness of optimal phenotypes to the bulk of the ffp . in what followswe briefly describe the method , while an illustrative example is provided in fig .s5 in si .to perform such study , we start by writing down the matrix of correlations between all reaction pairs . in doing this, we measure how much the variability of reaction flux affects the flux ( and viceversa ) . 
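as a concrete companion to steps (1)-(4) above, the sketch below runs the hit-and-run iteration on a toy flux polytope; the kernel basis, bounds and sample sizes are hypothetical placeholders rather than our actual implementation, and its last lines already compute the pairwise flux correlations and leading principal directions that are formalized next.

```python
# minimal hit-and-run sketch over a toy flux polytope (hypothetical kernel basis and bounds).
import numpy as np

rng = np.random.default_rng(0)

B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # basis of ker(S): 3 reactions, 2 free modes
lo, hi = np.zeros(3), np.array([10.0, 10.0, 15.0])   # box bounds defining the convex set

def hit_and_run(u, n_samples, thin=10):
    samples = []
    for _ in range(n_samples * thin):
        d = rng.normal(size=u.size)                  # (1) random direction in the kernel space
        d /= np.linalg.norm(d)
        Bu, Bd = B @ u, B @ d                        # (2) the line u + t*d, mapped to flux space
        with np.errstate(divide="ignore", invalid="ignore"):
            t1 = np.where(Bd != 0, (lo - Bu) / Bd, -np.inf)
            t2 = np.where(Bd != 0, (hi - Bu) / Bd, np.inf)
        t_min = np.max(np.minimum(t1, t2))           # (3) intersections with the boundary
        t_max = np.min(np.maximum(t1, t2))
        u = u + rng.uniform(t_min, t_max) * d        # (4) uniform point on the feasible chord
        samples.append(B @ u)
    return np.array(samples[::thin])                 # keep one point every `thin` iterations

V = hit_and_run(np.array([1.0, 1.0]), n_samples=5000)
C = np.corrcoef(V, rowvar=False)                     # pairwise flux correlations
evals, evecs = np.linalg.eigh(C)                     # principal directions of flux variability
proj = ((V - V.mean(0)) / V.std(0)) @ evecs[:, -2:]  # z-scored phenotypes on the two leading pcs
```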
in mathematical terms , for each pair of reactions , we have : where denotes an average over the sampled set and the denominator of the fraction is simply the product of the standard deviations of and .we plot such matrix in fig.[fig1]e .matrix is real and symmetric by definition and , thus , diagonalizable .this means that , for every eigenvector , one has .note that matrix describes paired flux fluctuations , in a reference frame centered on the mean flux vector . the eigenvectors of express , in turn , the directions along which such fluctuations are taking place .in particular , the eigenvectors associated with the first two largest ( in modulo ) eigenvalues dictate the two directions in space where the sampled ffp displays the greatest variability ( see fig .s5 in si ) .this implies that sampled phenotypes lie closer to the plane spanned by and than the ones produced by any other linear combination of eigenvectors . projectingall sampled ffp onto this plane allows thus to perform a drastic dimensional reduction yet retaining much of the original variability and allows to have a direct graphical insight on where phenotypes lie , on where the bulk of the ffp is located and on how the fba solution compares to them . in such plot , each phenotype is described by two coordinates , that may be parameterized via a radius and an angle .since the projection is normalised , it follows that .furthermore , the closer to one , the better the phenotype is described by only looking at variability along . as is one at the most and since we haveso many phenotypes clustered together , we chose to plot the pcaprojection by using an effective radius in fig .[ fig1]f . in this waywe could better discriminate among different phenotypes and got a ` closest to the origin , closest to the ' setup . as compared to previous works focused on characterizing the principal components of the solution space to obtain a low - dimensional decomposition of the steady flux states of the system , our approach presents two main conceptual differences .first , the sampling method used here produces a uniform sample over the full set of feasible flux states without introducing any bias towards high - growth flux states .second , we aim at a full description of all feasible flux states to conduct a statistical analysis of feasible phenotypes , which can not be done by only retaining pcs .we use pc analysis to visualize the eccentricity of the fba solution , but for all other purposes we take into account the whole set of metabolic states .we study the full metabolic flux space of the _ e. coli _core metabolic model , a condensed version of the genome - scale metabolic reconstruction _i_af1260 that contains central metabolism reactions and metabolites .this network is complemented with the biomass formation reaction and the atp maintenance reaction . as in fba ,feasible flux states of a metabolic network are those that fulfill stoichiometric mass balance constraints together with imposed upper and lower bounds on the reaction fluxes .these constraints restrict the number of solutions to a compact convex set which contains all possible flux steady states in a particular environmental condition . in glucoseminimal medium , the ffp space of _ e. 
coli _ core metabolism is determined by potentially active reactions , including biomass formation and atp maintenance reaction , and metabolites .note that we allow negative values for reversible reactions .we apply a fast and efficient hit - and - run algorithm ( see materials and methods ) that explores the full solution space at random to produce a raw sample of feasible states from which we extract a final uniform representative set of feasible states .notice that our approach is suitable for genome - scale network sizes beyond the reduced size of the _ e. coli _ core model .there is not any fundamental or technical bottleneck that prevents its application to complete metabolic descriptions at the cell level since uniform samples can also be generated in genome - scale networks .we used the _e. coli _ core metabolism due to a matter of computational time and ease of visualization . from the sampled set of _core metabolic states in minimal medium of glucose bounded to mmol/(gdw ) , we collected the metabolic flux profile of each individual reaction as the set of its feasible metabolic fluxes . from such profile , we computed the probability density function which describes the likelihood for a reaction to take on a particular flux value .as an example , see fig .[ fig1]a for the biomass function .we observe a variety of shapes ( fig .s1 in supporting information ( si ) ) , all of them low - variance , most displaying a maximum probability for a certain value of the flux inside the allowed range ( notice that none of these histograms can have more than one peak due to the convexity of the steady - state flux space ) , and many being clearly asymmetric . ) and second ( ) principal components .the plot is in polar coordinates , with the negative logarithm of the radius .the majority of points lies in a circle close to the origin ( the darker area ) .the fba solution ( green circle ) is , conversely , rather eccentric.,scaledwidth=70.0% ] to characterize the dispersion of the possible fluxes for each reaction , we measured its coefficient of variation calculated as the ratio between the standard deviation of possible fluxes and their average ( table s1 in si ) . for all but three reversible reactions ( malate dehydrogenase , glucose-6-phosphate isomerase , and glutamate dehydrogenase ) , the only reversible reactions having a low associated flux mean and thus a higher , this metric is below one and when ranked for all reactions it steadily decreases to almost zero , fig .interestingly , we find that this coefficient is significantly anticorrelated with the essentiality of reactions , as observed experimentally ( point - biserial correlation coefficient with p - value ) .this means that essential reactions tend to have a highly concentrated profile of feasible fluxes . besides , and only for the glucose transferase reaction glcpts , we find a zero probability of having a zero flux , which is not surprising as the lower bound given by fva is strictly greater than zero indicating that this reaction is essential for _ e. coli _ core metabolism in glucose minimal medium .the asymmetry of each profile was characterized by the distance between the more probable flux in the ffp space and the lower flux bound of the flux variability range rescaled by the flux variability range of the reaction ( table s1 in si ) . 
in fig .[ fig1]c we show a scatterplot of values for all core reactions .strikingly , the rescaled distances cluster in three regions around , and forming groups of sizes , and respectively .this indicates that the most probable flux is close to either the lower or upper bound or , conversely , the probability distribution function tends to be quite symmetric .moreover , we also observe an anticorrelation between the length of the flux range and the position of the most probable flux , so that the closer is this to its maximum value the shorter is the allowed range of fluxes . in order to assess the likelihood of fba maximization of the biomass reaction ( fba - mbr ) ( or equivalently of the growth rate ) solutions in relation to typical points within the whole ffp space ( typical , in our mathematical / computational context , means statistically representative in relation to the whole set of flux states contained in the ffp space ) , we calculated the average flux value for each reaction , that we named the mean , and compared it to the fba optimal biomass production flux .the complementary cumulative distribution function of the distances between these two characteristic fluxes rescaled by the flux variability range of reactions is shown in fig .[ fig1]d ( table s1 in si ) .we observe a broad distribution of values over several orders of magnitude with no mean value actually very close to the fba maximal solution except for a few reactions , which typically work at maximum growth . at the other end of the spectrum ,deviated reactions include for instance excretion of acetate and phosphate exchange . as a summary ,we conclude that the mean and the fba biomass optimum are rather distant , which suggests that fba optimal states are uninformative about phenotypes in the bulk of states in the ffp space . 
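for concreteness, both per-reaction summary statistics used above can be computed directly from a sample matrix; in the sketch below the arrays standing in for the sampled ffp and for the fba optimum are random placeholders, and the sampled minimum and maximum serve as a crude proxy for the fva flux variability range.

```python
# sketch of the two per-reaction summary statistics on placeholder data.
import numpy as np

rng = np.random.default_rng(2)
V = rng.beta(2.0, 5.0, size=(10_000, 95))            # placeholder sample, 95 'reactions'
v_fba = rng.uniform(0.0, 1.0, size=95)               # placeholder fba optimal flux vector

asym = np.empty(V.shape[1])
dist = np.empty(V.shape[1])
for i, col in enumerate(V.T):
    lo, hi = col.min(), col.max()                    # proxy for the flux variability range
    counts, edges = np.histogram(col, bins=50)
    mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
    asym[i] = (mode - lo) / (hi - lo)                # rescaled position of the most probable flux
    dist[i] = abs(col.mean() - v_fba[i]) / (hi - lo) # rescaled distance from ffp mean to optimum

x = np.sort(dist)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size       # complementary cumulative distribution
```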
to visualize neatly the eccentricity of the fba maximum growth state with respect to the bulk of metabolic flux solutions , we used principal component analysis in order to reduce the high - dimensionality of the full flux solution space projecting it onto a two - dimensional plane from the most informative viewpoint ( see materials and methods ) .we took reaction profiles in pairs to calculate the matrix of pearson correlation coefficients measuring their degree of linear association fig .[ fig1]e ( table s3 in si ) .note that an ordering of reactions by pathways allows to have a clear visual feedback of intra- and inter - pathway correlations taking place in the core metabolic network , such that clusters of highly correlated reactions appear as bigger darker squares .the two axes of our projection correspond to the two first principal components of this profile correlation matrix and , which account for most of the variability in profile correlations .each sampled metabolic flux state was rescaled as a z - score centered around the mean and projected onto these axes , as shown in the scatterplot fig .[ fig1]f in polar coordinates , where we applied a negative logarithmic transformation to the radial coordinate for ease of visualization .we see that the majority of phenotypes have a radius close to zero .since points closer to the origin are better described by the two principal components , this implies that and capture the largest variability of the sampled ffp .clearly , the fba optimal growth solution is rather eccentric with respect to typical solutions , with an associated radius of in this representation .in fact , of states have a smaller radius than the optimal growth solution ( see fig .s2 in si ) .we focus on the relationship between primary carbon source uptakes and oxygen need to illustrate the potential of the ffp space as a benchmark to calibrate the deviation of _ in silico _ predicted optimal phenotypes from experimental observations . sampled ffp states of _ e. coli_ core model , in particular ffp mean values as a function of the upper bound uptake rate of the carbon source , are compared with reported experimental data for oxygen uptakes in minimal medium with glucose , pyruvate , or succinate as a primary carbon source , fig .we also included in the figures the line of optimality representing fba optimal growth solutions .we used glucose experimental data points from , experimental results for pyruvate reported in , and experimental results in for the quantitative relationship between oxygen uptake rate and acetate production rate as a function of succinate uptake rate .mmol/(gdw ) . *( b ) * oxygen vs. pyruvate uptake rates , experimental data from . the ffp space is sampled with pyruvate bounded to mmol/(gdw ) . *( c ) * oxygen vs. succinate uptake rates , experimental data from .the ffp space is sampled with succinate bounded to mmol/(gdw ) ._ inset _ acetate production rate vs. succinate uptake rate , experimental data from .,scaledwidth=45.0% ] in all cases , fba - mbr reproduces well experimental data points in the low carbon source uptake region , where _ e. coli _ is indeed optimizing biomass yield .however , oxygen uptake rate saturates after some critical threshold of carbon source uptake rate , which depends on the carbon source , reaching a plateau which , among other possibilities , could be explained by the existence of a physiological enzymatic limit in oxygen uptake that lessens the capacity of the respiratory system . 
the plateau levels are mmol/(gdw ) for glucose , mmol/(gdw ) for pyruvate , and mmol/(gdw ) for succinate . in this region of high carbon source uptake ,fba - mbr predicts an oxygen uptake overestimated by around with respect to the values reported from experiments .while this amount is in principle large , the ffp space gives a standard that helps to calibrate it .we measured the eccentricity of experimental observations as their distance to the ffp mean . for glucose , this value is , which makes the distance of between the fba - mbr prediction and experimental data relatively low , fig .the distance of between the fba - mbr prediction and experimental data is slightly worse for pyruvate , fig .[ fig2]b , in which case the eccentricity of experimental observations is of .the disagreement between optimality predictions and experimental data is much more significative in the case of succinate , fig .[ fig2]c , for which the eccentricity of experimental observations is only of , while the distance between the fba - mbr prediction and experimental data is of , meaning that the ffp mean is indeed more adjusted to observations .the case of acetate production for this carbon source is even more conspicuous , fig .[ fig2]c _ inset_. while fba - mbr is still reproducing well the experimental results of no acetate production in the low succinate uptake region , it can not predict production of acetate at any succinate uptake rate due to the fact that fba - mbr in minimal medium with unlimited oxygen does not capture the enzymatic oxygen limitation .the fba - mbr solution diverts resources to the production of atp entirely through the oxidative phosphorylation pathway .thus , it fails to reproduce experimental observations of acetate production in the region of high succinate uptake rates .in contrast , most metabolic states in the ffp space are consistent with acetate production , so that in this case the ffp mean turns out as a good predictor of the experimentally observed metabolic behavior . in summary ,while fba - mbr predictions seem accurate for low carbon source uptake rate states in minimal medium as seen previously , the experimental points diverge from the fba - mbr prediction state when increased values of carbon source uptake rates are considered .note that , in general , it is not straightforward to quantify the significance of the divergence . here , we propose to use the ffp space as a reference standard . according to this calibration, we remarkably find that fba optimal growth predictions of oxygen needs versus glucose , pyruvate , or succinate uptake are worse the more downstream the position of the carbon source into catalytic metabolism .using the _ e. 
coli _ core metabolism , we have checked that the ratio of the maximum atp production rate to the maximum oxygen uptake ( both calculated by fba optimization of atp production rate ) for the three carbon sources glucose , pyruvate , and succinate are respectively , , and , so this ratio decreases as more downstream in the catalytic metabolism .these results are consistent with values reported in .fba privileges energy production by diverting fluxes to oxidative phosphorylation providing maximum energy for growth , so that fba should work worse the less effective the oxidation of the carbon source is for atp synthesis .this can be explained in terms of departures of energy from substrate catabolism to functions other than growth , like basal maintenance , which become more relevant in relative terms as compared to the total energy production when the energy - to - redox ratios of the carbon substrate are lower .we resampled the high - growth metabolic region of the _ e. coli _ core metabolism ffp space in glucose minimal medium with glucose upper bound of mmol/(gdw ) , as in subsection 2a .we defined this region by setting a minimal threshold for the biomass production of mmol/(gdw ) , and produced a sampled with a final size of states .we note that phenotypes in this high - growth sample remain very close to the biomass yield threshold due to the exponential decrease of the number of feasible flux states with increased biomass production , as in the biomass flux profile in fig .[ fig1]a . in this region, we identified pathway utilization typical of proliferative microbial metabolism , even when considering a minimal medium and unlimited oxygen uptake . this metabolic behavior is consistent with experimental data but it is unreachable by fba models based on optimality principles ( unless optimization is accompanied by auxiliary constraints not assumed in standard fba implementations , like the solvent capacity constraint , or by modelization beyond stoichiometric mass balance , introducing for instance thermodynamically feasible kinetics or enzyme synthesis ) .we checked that the by - products can not be explained by fba - mbr in minimal medium with unlimited oxygen supply since , in this optimization framework , metabolic fluxes are basically forced to atp production through oxidative phosphorylation with excretion of co as waste .however , increasing the oxygen limitation in fba - mbr results in secretion of formate , acetate , and ethanol in that order , with corresponding shifts in metabolic behavior . according to the ffp space of _ e. coli _ core metabolism , we observe that the high - biomass production ffp subsample is characterized by the secretion of small organic acid molecules , even when the supply of oxygen is unlimited .this fact points to the simultaneous utilization of glycolysis and oxidative phosphorylation to produce biomass and energy and so to suboptimal states .this observation is supported by results from - metabolic flux analysis in _e. 
coli _ , where repressed oxidative phosphorylation was proposed as responsible for the measured submaximal aerobic growth .pathway utilization is illustrated in the schematic shown in fig .quantitative relationships between the production of small organic acids molecules and glucose and oxygen uptake rates are shown in the remaining panels of fig .three - dimensional scatterplots for the production rates of formate , acetate , and ethanol are shown in figs .[ fig3]b , [ fig3]d , and [ fig3]f respectively , with projections into the three possible two - dimensional planes shown in figs .[ fig3]c , [ fig3]e , and [ fig3]g respectively .figure s3 in si gives results for lactate .as the levels of glucose and oxygen uptakes are raised , metabolic phenotypes can achieve an increased production of formate , acetate , and ethanol , even though the majority of feasible phenotypes remain at low organic acids production values .due to the high - growth requirement , oxygen uptake is always high but its variability increases with glucose uptake increase around a value of approximately mmol/(gdw ) , which clusters the majority of high - growth metabolic phenotypes .interestingly , this oxygen uptake rate value marks a region in the ffp space with maximum potential production rates of formate , acetate , and ethanol . above and below that valuemost states are concentrated in the range 12 & 12#1212_12%12[1][0] * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) ( ) * * ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) _ _ ( , , ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( )it was proven [ 1 ] that , by iterating steps ( 1 - 4 ) of the hit - and - run algotihm to sample the space of feasible metabolic flux solutions " section in the main text , the samples obtained are asymptotically unbiased , in the sense that the whole ffp space is explored with the same likelihood , in the limit of very large samples . in practice , one must always work with a finite sample , and hence we have taken some additional measures to ensure that our samples were truly representative of the whole ffp space .in particular : 1 . only one every points generated by hr was included in the final sample .this effectively decreases the `` mixing time '' of the algorithm , since the correlation among the points that are actually retained decays fast .different initial conditions were used .results showed no dependence on the initial condition , as expected for large samples .even so , the first 30% of points was discarded , in order to rule out any subtler effect of the initial condition on the final results .results were recalculated using subsamples of size 10% of the original sample .we did not find any qualitative difference between the two sets . because the hr algorithm is very efficient itself and due to the dimensionality reduction that our implementation adds ( see [ 2 ] for details ), we were able to generate very large samples in reasonable time .for each model , we initially created samples of size , giving rise to a final set of feasible solutions , uniformly distributed along the whole ffp space . 
[ fig . s4 caption , fragment : illustration of how the hit - and - run algorithm produces a new feasible solution from a given one . ]
[ fig . s5 caption : ( a ) the sampled points span a wide range of values ; finding the eigenvectors of the correlation matrix , one sees that they actually cluster around a plane ( plotted as a yellow grid ) . diagonalizing the correlation matrix gives three vectors ( plotted in blue , red and green ) identifying the directions in space where the points show most variation , in decreasing order ; a black square is plotted as a reference eccentric point . ( b ) projecting the sampled ffp along these vectors squeezes all points into a thin region close to the plane of the first two of them , showing that the greatest variability occurs along those two directions ; in this representation the eccentric black square lies far from the plane , with a large third coordinate . ]
[ fig . s6 caption : ( a ) toy network comprising 3 metabolites and 6 reactions . ( b ) the four elementary flux modes ( efms ) of the network are simple paths connecting metabolic inputs to outputs , shown together with their vectorial representation ; these modes depend on the stoichiometry of the network and do not capture , per se , reactions being capped . ( c ) visualization of the ffp space and efms in a 3d basis spanning the kernel of the stoichiometric matrix . ]
[ si table : content not recovered . ]
experimental and empirical observations on cell metabolism can not be understood as a whole without their integration into a consistent systematic framework . however , the characterization of metabolic flux phenotypes is typically reduced to the study of a single optimal state , like maximum biomass yield that is by far the most common assumption . here we confront optimal growth solutions to the whole set of feasible flux phenotypes ( ffp ) , which provides a benchmark to assess the likelihood of optimal and high - growth states and their agreement with experimental results . in addition , ffp maps are able to uncover metabolic behaviors , such as aerobic fermentation accompanying exponential growth on sugars at nutrient excess conditions , that are unreachable using standard models based on optimality principles . the information content of the full ffp space provides us with a map to explore and evaluate metabolic behavior and capabilities , and so it opens new avenues for biotechnological and biomedical applications .
quantum key distribution ( qkd ) is a technology that provides a practical way to distribute a secret key between two distant parties using quantum physics and without making any assumptions on a potential eavesdropper s power .such a level of theoretical security can not be achieved using classical protocols .recently , the study of the practical security of qkd systems has attracted a lot of interest from the scientific community ( see for example , ) .indeed , deviations between the theoretical description of a qkd protocol and its implementation , open security loopholes that can be exploited by an eavesdropper . demonstrations of partial or full eavesdropping against commercial discrete - variable qkd systems have been performed .so far such hacking attacks were compiled on discrete variable systems as they were the only ones available at that time .however , recently a commercial qkd system using continuous variables ( cv ) , that features secure distances comparable to commercial discrete - variable qkd systems , was released .while the theoretical security of cv - qkd protocols has been established , the study of practical security of cv - qkd devices is far from sufficient ( see for example , ) .this is mostly due to the relative youth of the technology .recent work includes the extension from discrete - variable qkd to cv - qkd of an attack ( and solution ) that exploits the wavelength dependency of fiber beam splitters .however , this attack was limited to the case where bob performs heterodyne detection , i.e. , he measures both quadratures of the electromagnetic field simultaneously . in this paper, we propose another wavelength dependency attack ( along with a solution ) but this time one that can be applied to a cv - qkd system using homodyne detection .such a system also corresponds to those that are currently commercially available . in ref . , an attack targeting the local oscillator calibration routine of a cv - qkd system was proposed together with a family of countermeasures that consisted in measuring the shot noise in real time .we propose and provide experimental evidence of a wavelength attack targeting the real - time shot noise measurement procedure proposed in ref . . by inserting light pulses at different wavelengths ,this attack allows the eavesdropper to bias the shot noise estimation even if it is done in real time .based on experimental evidence , we discuss the feasibility of this attack and suggest a prevention scheme by improving the previously proposed countermeasures . in sec .[ background ] , we first recall the basics of a cv - qkd scheme based on a gaussian modulation of coherent states and homodyne detection .we present in detail how the relevant quantities , used to estimate the secret key rate of the protocol , are computed and tackle the problem of the shot noise evaluation procedure .then , we give the principle of the attack proposed in and the associated countermeasures . in sec .[ hack ] , we explain how the wavelength dependency of the fiber beam splitter at the receiver s side can be exploited to bypass the real - time shot noise measurement countermeasure and detail the various steps of our attack . 
in sec .[ fea ] , we study the practical feasibility of our scheme based on experimental values .finally , we show in sec .[ countermeasure ] how to improve the real - time shot noise measurement technique in order to detect our attack .the conclusion is given in sec .a typical cv - qkd system using homodyne detection can be realized using the schematic given in fig .[ homodyne ] . in this scheme ,the weak signal and strong local oscillator are generated from the same coherent state pulse by a beam splitter .the signal is then modulated randomly following a gaussian distribution with variance and zero mean in both quadratures , by using phase and amplitude modulators .the signal and local oscillator are separated in time and modulated into orthogonal polarizations using a polarization beam splitter before being inserted into the channel .when these pulses arrive at bob s side , bob randomly selects or in order to measure either the or quadrature , respectively . after measuring ,either direct or reverse error reconciliation ( alternatively , postselection ) protocols are performed in order to recover a common shared key .this is then followed by privacy amplification to reduce the eavesdropper s ( eve ) knowledge to an arbitrary small amount .homodyne detection plays a key role in cv - qkd implementations . to illustrate the new wavelength attack scheme ,let us first review the physical description of the homodyne detection .note that a more detailed explanation can be found in appendix [ appendix 1 ] .here we assume that both the signal and the local oscillator are coherent states .the signal state is denoted as and the local oscillator is denoted as .the specific quadrature of the signal is related to bob s modulated phase and the substraction of the detector outcomes .this can be expressed as here and are the photocurrents recorded by detector 1 and detector 2 , respectively ; is the efficiency of the detectors ; is the quadrature of the signal and is the quadrature of the vacuum state .when , and when , . a clock signal , which is generated by the local oscillator in a practical cv - qkd system ( see fig .[ homodyne ] ) , is necessary for maximizing the output of the homodyne detection .however , it opens a potential loophole for the eavesdropper . in ref . , the local oscillator calibration attack was proposed , in which eve modifies the shape of the local oscillator pulse in order to induce a delay to the clock trigger . as a result ,the homodyne detection outcome will drop down after such a delay due to the circuit design , which results in a decrease of the detection response slope , i.e. , between the variance of the homodyne measurement and the local oscillator power .the value of the shot noise will be overestimated and consequently the excess noise present will be underestimated .hence , eve s presence will be underestimated . to prevent this attack, bob can apply real - time shot noise measurements , which consists of two types of implementations . in this paper, we concentrate on the first one as shown in fig .[ homo2 ] . 
in this scheme, an amplitude modulator is added on the signal path .bob randomly applies attenuation ratios and by the amplitude modulator to measure the shot noise level in real time .the measurement results will be directly used to estimate the shot noise in the data processing that follows .fiber beam splitters are one of the key components in an all - fiber qkd system .the most widely used technology in making fiber beam splitters is the so - called fused biconical taper technology . as is described in ref . , the coupling ratio of fused biconical taper beam splitter varies with the wavelength of the input light . for sufficing different requirements, there are three types of fused biconical taper beam splitters : the single wavelength type , the wavelength flatten type and the double wavelength type . compared to the first two ,the double wavelength type fused biconical taper beam splitter is more popular commercially because of its relatively stable performance in a wide wavelength range .even so , it does not mean that it is totally wavelength independent .we experimentally tested two double wavelength type ( reflection / transmittance ) and a fused biconical taper beam splitter in our laboratory .the relationship between their coupling ratios and wavelengths is shown in table i. [ thorlabs ] .the transmittance of thorlabs double wavelength type beam splitter and a beam splitter under different wavelengths ( nm ) . [cols="^,^,^,^,^,^,^",options="header " , ]in this section , a hacking scheme on a cv - qkd system using homodyne detection is proposed . before introducing our scheme , two facts should be noted .first , in the improved cv - qkd scheme shown in fig . [ homo2 ] , bob does not need to measure the intensity of the local oscillator because the shot noise level can be directly measured in real time . on the other hand ,very low light intensity is enough to trigger the clock . in this case, eve can hack the system by only utilizing the wavelength dependent character of the fused biconical taper beam splitter .a full wavelength attack scheme is proposed for this situation . moreover , even if bob monitors the local oscillator intensity , by combining the local oscillator calibration attack with the wavelength attack idea , eve can still successfully achieve all of the secure key information without being discovered .our attack scheme can be divided into two parts : attack part 1 and attack part 2 . in this attack eveperforms a full intercept - resend attack . for this purpose, she measures the information sent from alice by performing heterodyne detection on both the signal and the local oscillator .after which she obtains two quadrature values and . according to these measurement results , she prepares a new signal and local oscillator and sends them to bob . in this stage, two strategies can be used . _ strategya : _ suppose that bob does not monitor the local oscillator intensity . instead of preparing a signal state of amplitude along with a local oscillator of amplitude as in the regular intercept - resend attack , eve chooses a real number larger than 1 and prepares a signal state of amplitude along with a local oscillator of amplitude .the pulses are separated in time and orthogonal polarizations , as the original pulses were , and then sent onto bob . 
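to make the consequence of a detuned coupling ratio concrete before describing the attack, the toy monte carlo below compares the differential photocurrent behind a balanced splitter with that of one whose transmittance has drifted away from 50%; the intensity, efficiency and transmittance values are hypothetical, and the shot noise is modelled crudely as poisson counting noise rather than with the full homodyne treatment of appendix [ appendix 1 ].

```python
# toy monte carlo of the differential photocurrent behind a beam splitter of transmittance T.
import numpy as np

rng = np.random.default_rng(1)

def diff_current(intensity, T, eta=0.6, shots=100_000):
    n1 = rng.poisson(eta * T * intensity, size=shots)          # counts at detector 1
    n2 = rng.poisson(eta * (1.0 - T) * intensity, size=shots)  # counts at detector 2
    return n1 - n2

balanced = diff_current(intensity=1e4, T=0.50)   # 50/50 coupling at the design wavelength
detuned  = diff_current(intensity=1e4, T=0.47)   # coupling shifted by a different wavelength

print(balanced.mean(), detuned.mean())           # ~0 versus an offset growing with (2T - 1)
```

the residual offset, proportional to the imbalance ( 2t - 1 ) and to the pulse intensity, is the handle that the wavelength attack described next exploits to bias the shot-noise estimate.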
in this strategy ,bob measures the quadratures with a variance of and a realistic shot noise of , where is the excess noise in units of , and ( see appendix [ appendix 1 ] for details ) is the shot noise variance without the attack .the excess noise bob estimates is equal to $ ] .if he still uses as the shot noise unit , the excess noise he estimates can be made arbitrarily close to zero for certain channel efficiencies by choosing the proper .for instance , by choosing typical values such as , and = 10 , the excess noise estimated by bob is .it reaches zero when , corresponding to db loss or about km of optical fiber link ( with loss assumed to be db / km ) .thus entirely compromising the security of the protocol . _ strategy b : _suppose that bob monitors the local oscillator intensity and its linear relation with the shot noise . andeve performs the local oscillator calibration attack as proposed in ref . . in this strategy ,eve controls the slope of the homodyne detection response by calibrating the trigger time . according to the analysis in ,the excess noise estimated by alice and bob is close to zero when the realistic shot noise is reduced by of the original level and .both of these strategies alone can not pass the protection test proposed in ref . ( fig .[ homo2 ] ) . under this technique, bob can easily monitor the shot noise level in real time , therefore he can modify the parameters immediately to fully protect against the above attacks . in order tonot be discovered , eve should take one more step to keep the counter - measurement results normal .for this purpose , the wavelength dependent character of fused biconical taper beam splitter is utilized to nullify the protection measurement in the second part of the scheme . in this attack ,eve prepares and resends two extra coherent state pulses with wavelengths different from the typical communication wavelength of nm . one of them is modulated the same polarization as the signal and the other with the local oscillator .so that when they reach bob s side , one goes into the signal path and the other goes into the local oscillator path .let us denote these pulses and also their intensities as and .eve randomly chooses the wavelengths of and from one of the following two sets : where denotes the transmittance of the fused biconical taper beam splitter corresponding to the different wavelengths ( see table 1 ) .as the transmittances are deviated from , an extra differential current proportional to the light intensity will appear in the final results .when bob applies strong attenuation ( ) on the signal , the extra differential current is primarily contributed by .this extra contribution is equal to or plus shot noise ( cf .( [ photocurr2 ] ) for details ) , where denotes the detector efficiency corresponding to the different wavelengths . asthis contribution should have zero statistical average and positive variance , eve must ensure that and choose and with equal probability . in this case , the variance is approximately equal to .therefore eve should make for strategy a and for strategy b , in order to make the shot noise measurement results seem normal . on the other hand ,when bob applies no attenuation ( ) on the signal , the extra differential current comes from both and .similarly , the differential current introduced by is or plus shot noise .eve makes and chooses and with equal probability . 
for convenience , we summarize the notations defined above as follows : by making , the contribution from will cancel the contribution from except for a small amount of shot noise , which keeps the influence to the quadrature measurement results at an acceptable level . a more rigorous analysis taking the shot noises into account is described in sec .[ fea ] and appendix [ appendix_2 ] .in this section , we analyze bob s estimated excess noise under the two kinds of attacks proposed in sec .[ hack ] . for simplicity , we take ( ; ) , , and the intensity of the local oscillator ( in units of photo - electron number ) .let us analyze the measurement outcomes corresponding to and .first though , let us briefly review the method of estimating the excess noise in cv - qkd . by denoting as the quadrature modulated by alice ( or ) and as the quadrature measured by bob ( or ), we note that here is the channel transmittance , is the modulation variance , is the excess noise , is the shot noise , is the efficiency of homodyne detector and is the electric noise ( all expressed in their respective units ) . among these parameters , and pre - known as the system parameters , is estimated by the local oscillator intensity from , and the others are estimated from alice and bob s correlated variables . the excess noise can then be estimated as in a later protection scheme given in ref . , two attenuation ratios , and , are introduced on the signal path .typically we set for shot noise estimation and for quadrature measurements .the variance of should be expressed as we can then estimate the parameters by /\tilde{n}_0 .\end{array}\ ] ] from now on , we denote as a constant value ( that is , the shot noise value when the system runs normally ) and and as the estimation values . let us now analyze how large and could be under the two different attack strategies ._ strategy a : _ the differential current at the output of the homodyne detection can be considered as the summation of and , which present the contributions from part 1 and part 2 of our attack scheme respectively .that is , , where the index denotes that bob applies the attenuation ratio . in strategya , can be obtained ( cf .( [ xpapp ] ) ) by taking , and its variance can then be computed as + r_i\eta\eta_{ch}\xi n_0 + v_{el}\\ & = r_i\eta\eta_{ch}(v_a + 2 + \xi)n_0 + \frac{n_0}{n } + v_{el}. \end{array}\ ] ] for , we derive its variance in appendix [ appendix_2 ] ( cf .( [ vpart2i ] ) ) as follows thus the total variance is given by we can now get the estimations about the shot noise level and excess noise under strategy a to be /\tilde{n}_0 . \end{array}\ ] ] by choosing proper intensities , and , eve can make and arbitrary close to zero . for this purpose , we take , for example .assume , simple calculations show that eve can choose , , , and , which are orders of magnitude smaller than ._ strategy b : _as long as eve can change the slope of the homodyne detection response by calibrating the trigger time , the excess noise will be close to zero .let us assume the realistic shot noise is .it is easy to derive that + v_{el } , \end{array}\ ] ] and is the same as in strategy a. here and are parameters chosen by eve , and she should make in order to keep the estimated parameters normal . therefore + v_{el}\\ & ~~ + ( 1 - r_i)^2d^2 + ( 35.81 + 35.47r_i^2)d .\end{array}\ ] ] the shot noise level and excess noise under attack 2 can then be computed to give /\tilde{n}_0 . 
\end{array}\ ] ] by choosing proper intensities , and , eve can make and arbitrarily close to zero .let us take and , for example .again a simple calculation shows that , by choosing , , , and , we again get about orders of magnitude smaller than as in strategy a. finally , we note that the intensities of the pulses in part 2 will affect the local oscillator intensity measurement .this effect is small due to the low strength of the pulses in part 2 , and eve can fully compensate it by decreasing the local oscillator intensity in part 1 and carefully calibrating the trigger time .in the former proposed scheme in the real - time shot noise measurement regime , only two attenuation ratios and are applied on the signal path , and we have already shown that this is not enough to detect the wavelength attack .in fact , in that case , according to eqs .( [ vsa ] ) and ( [ vsb ] ) , the total noise can be written as a second - order polynomial of the attenuation ratio : where ( ) is the signal on the detection caused by the attack signal going through the signal path ( local oscillator path ) . the shot noise measurement procedure in ref . assumes that is a linear function of and this is why it is defeated by the wavelength attack , which uses the term to compensate for the terms and when .the countermeasure can be modified to thwart the wavelength attack by allowing bob to use a third attenuation ratio , thereby observing for three values of . this way the three coefficients of the polynomial can be obtained .the coefficient in front of should be 0 in an ideal setting . to avoid the wavelength attack ,it is enough that alice and bob ensure that . indeed , in that case , hence , ( since is not small compared to ) .as a result , , and it is not possible anymore to compensate with . for instance , bob can randomly apply attenuation ratios , and to the amplitude modulator , with probabilities of , and respectively . as has been pointed out in ref . , this countermeasure has an impact on the overall key rate since some pulses are attenuated . in our example , assuming that of the pulses that are attenuated are discarded , the final key rate is the same as in ref . .it is worth noting that applying randomly several attenuation ratios on bob s side allows us to check the transmittance linearity with respect to the attenuation ratio in the same way as we do for the noise .this allows us for instance to defeat saturation attacks that rely on non - linearities of the detection apparatus .therefore this countermeasure defeats all currently known attacks on the detection apparatus of gaussian cvqkd , and is expected to constitute a strong defense against variants of these attacks .in addition to the procedure above , physical countermeasures such as adding wavelength filters before detection ( to ensure that the wavelengths used for the attacks are close to the system wavelength , which forces the attacker to use high - power signals ) , and a monitoring of the local oscillator intensity ( to detect these high - power signals ) are also suggested .in conclusion , we proposed two strategies to realize a wavelength attack targeting a practical cv - qkd system using homodyne detection . 
by inserting light pulses at different wavelengths , with intensities lower than the local oscillator light by three orders of magnitude, eve can bias the shot noise and the excess noise estimated by alice and bob .in other words , eve can tap all of the secure key information without being discovered .the real - time shot noise measurement scheme as proposed in ref . can not detect this type of attack .however , it can be improved by using three attenuation ratios to successfully fix this security loophole .moreover , other physical countermeasures , such as adding additional wavelength filters and monitoring the local oscillator intensity , are also suggested .we thank xiao - tian song and yun - guang han for providing the test data .this work was supported by the national basic research program of china ( grants no .2011cba00200 and no .2011cb921200 ) , national natural science foundation of china ( grants no .60921091 and no .61101137 ) .p. j. and s. k .- j .acknowledge support from the french national research agency , through the hipercom ( 2011-chri-006 ) project , by the direccte ile - de - france through the qvpn ( feder-41402 ) project , and by the european union through the q - cert ( fp7-people-2009-iapp ) project . c. w. acknowledges support from nserc .when a signal , described by the annihilation operator , is inserted to a photodetector with an efficiency of , the measured annihilation operator becomes , where denotes the vacuum mode .the input photons are converted to an electric current with strength , where is a constant amplification factor and represents the number of electrons . without loss of generality , we set for simplicity . in general , we can calculate the variance of as \\ & = \eta^2\alpha^4_{lo}(2t-1)^2 + \eta\alpha^2_{lo}[1-\eta+\eta(2t-1)^2]\\ & ~~+ 4\eta^2\alpha^2_{lo}t(1-t)(\langle x^2_{\phi}\rangle + 1 ) \end{array}\ ] ] finally , the differential current introduced by excess noise and electric noise should be added . hence , the total output current is with a variance of .v. scarani , h. bechmann - pasquinucci , n. j. cerf , m. duek , n. lutkenhaus , and m. peev , _ rev .phys . _ * 81 * , 1301 ( 2009 ) .b. qi , c. -h .f. fung , h. -k .lo , and x. ma , _ quant . inf .comp . _ * 7 * , 73 - 82 ( 2007 ) . c. weedbrook , a. m. lance , w. p. bowen , t. symul , t. c. ralph , and p. k. lam , _ phys .lett . _ * 93 * , 170504 ( 2004 ) ; c. weedbrook , a. m. lance , w. p. bowen , t. symul , t. c. ralph , and p. k. lam , _ phys .a. _ * 73 * , 022316 ( 2006 ) .
imperfect devices in commercial quantum key distribution systems open security loopholes that an eavesdropper may exploit . an example of one such imperfection is the wavelength dependent coupling ratio of the fiber beam splitter . utilizing this loophole , the eavesdropper can vary the transmittances of the fiber beam splitter at the receiver s side by inserting lights with wavelengths different from what is normally used . here , we propose a wavelength attack on a practical continuous - variable quantum key distribution system using homodyne detection . by inserting light pulses at different wavelengths , this attack allows the eavesdropper to bias the shot noise estimation even if it is done in real time . based on experimental data , we discuss the feasibility of this attack and suggest a prevention scheme by improving the previously proposed countermeasures .
the purpose of this letter is to remark on the well cited research article , where the following tri - trophic population model originally proposed in is considered , this model is based on the leslie - gower formulation , and considers the interactions between a generalist top predator , specialist middle predator , and prey , where are solutions to the above system - .the model is very rich dynamically , and has led to a number of works in the literature . in various theorems on the boundedness of the system - are proved , and the existence of an invariant attracting set is established . in particular , we recall the following result ( theorem 3 ( 2i ) , ( 3i ) ) [ thm : aziz ] consider the model - . under the assumption that all solutions to - are uniformly bounded forward in time , for any initial data in , and they eventually enter a bounded attracting set .furthermore system - is dissipative .note , is explicitly defined in .also condition is equation ( 7 ) in .the form of is different from equation ( 7 ) in , as in different constants have been used , than what we are currently using .however it is a matter of simple algebra to convert equation ( 7 ) from to our setting .our aim in the current letter is to show that ( theorem 3 ( 2i ) , ( 3i ) ) is incorrect . in particularwe show , \1 ) solutions to - are not bounded uniformly in time , even if condition from theorem [ thm : aziz ] is met .furthermore , solutions to - can even blow - up in finite time for large initial data .thus there is no absorbing set for all initial conditions in , and system - is not dissipative in , even under condition .\2 ) similar results hold for the spatially explicit model .\3 ) the above results can be validated numerically .we choose parameters satisfying , and show that numerical simulations of - , and its spatially extended form , still lead to finite time blow - up .we state the following theorem [ thm : r1 ] consider the three species food chain model - . for blows up in finite time , that is as long as the initial data are large enough .first set .consider the following modification to system - , with solution , recall that , the solution to , blows up in finite time , at , and we have an exact solution for , for , given by , however , using this exact solution for , one can find an exact solution to via separation of variables .thus for .next for a given we choose s.t we can enforce to hold s.t . $ ]. implies ( here we assume , else it is an uninteresting case ) equivalently we have this of course is always possible for large enough. however , note that is a subsolution to .if , this is immediate as then .if , then we can assume , where we select s.t , and then choose in place of in .also ( with in place of in ) is a subsolution to , as long as holds .thus via direct comparison , and .since implies , it is immediate that the solution to will also blow - up , via direct comparison with solving with in place of .see figure [ fig : ode ] , for a simple graphical representation of this idea .thus we have ascertained the blow - up of system - , via direct comparison to the modified system - .this proves the theorem .we next state the following theorem [ thm : d1 ] the three species food chain model - , even under condition is not dissipative in all of . 
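before turning to the proof of theorem [ thm : d1 ] , the comparison argument behind theorem [ thm : r1 ] can be illustrated numerically : the sketch below ( python with scipy ) integrates a quadratically growing bounding ode of the form dr / dt = k r^2 , whose solution r ( t ) = r_0/(1 - k r_0 t ) blows up at t^ * = 1/( k r_0 ) . the coefficient and the initial datum are placeholders , since the explicit constants of the modified system are not reproduced here .

```python
import numpy as np
from scipy.integrate import solve_ivp

k, r0 = 1.0, 50.0                 # placeholder coefficient and (large) initial datum
t_star = 1.0 / (k * r0)           # analytic blow-up time of the bounding ode

sol = solve_ivp(lambda t, y: k * y**2, [0.0, 0.99 * t_star], [r0],
                rtol=1e-8, max_step=1e-5)

print(f"t* = {t_star:.4f}")
print(f"r(0.99 t*) = {sol.y[0, -1]:.3e}   (exact value: {r0 / 0.01:.3e})")
# the numerical solution tracks r0/(1 - k*r0*t) and grows without bound as t -> t*.
```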
via theorem[ thm : r1 ] , there exists initial data in , for which solutions blow - up in finite time , and thus do not enter any bounded attracting set .thus system - is not dissipative .the essential error made in the proof in is in equation in .the derived bound for is inserted in an estimate for the sum of and . although it is true that is bounded , and enters an attracting set eventually , _ there is some transition time _ before this happens .if is chosen arbitrarily large , then this transition time can be made arbitrarily long .the key is for chosen large enough , during this transition time , we can enforce , for as long as it takes to bow up . in this case will also blow - up in finite time , ( by comparison to ) , and hence never enter any bounded attracting set .we now consider the following spatially extended version of - defined on . here and where and , the parameters in the problem as earlier , are positive constants .we can prescribe either dirichlet or neumann boundary conditions . is an open bounded domain in with smooth boundary . , and are the positive diffusion coefficients . the initial data assumed to be nonnegative and uniformly bounded on .the nonnegativity of the solutions is preserved by application of classical results on invariant regions ( ) , since the reaction terms are quasi - positive , i.e. the usual norms in the spaces , and are respectively denoted by since the reaction terms are continuously differentiable on , then for any initial data in or , it is easy to check directly their lipschitz continuity on bounded subsets of the domain of a fractional power of the operator , where the three dimensional identity matrix , is the laplacian operator and denotes the transposition . under these assumptions ,the following local existence result is well known ( see ) .[ prop : ls ] the system - admits a unique , classical solution on .if then here we will show that - , blows up in finite time .we will do this by looking back at the blow - up for in , and then using a standard comparison method .consider - , with initial conditions and strictly positive . by integrating the third equation of the ode system ,we have gives we prove that the function : vanishes at a time and since , then the solution will blow - up in finite time .since the reaction terms are continuous functions , then the solutions are classical and continuous and is sufficiently large , then there exists such that t < \frac{1}{r_{0}^{-}}-\frac{c}{2}t,\ \ \ \ \text{for all } t\in ( 0,\delta ) .\]]if is sufficiently large , then we can find such that this entails thus one has , but , andby application of the mean value theorem , we obtain the existence of some , , s.t .this implies the solution of - blows up in finite time , at , and by a standard comparison argument , the solution of the corresponding pde system - , also blows up in finite time .we can thus state the following theorem [ thm : tp1 ] consider the spatially explicit three species food chain model - . for , blows up in finite time , that is as long as the initial data are large enough . here .note the above argument easily generalises to the case , where thus one can also state the following corollary [ cor : c2 ] consider the three species food chain model - .even if solutions to - with certain initial data are not bounded forward in time .in fact the solution to can blow - up in finite time , that is as long as the initial data are large enough . 
here .we remark that the methods of this section can be directly applied to prove blow up in the ode case as well .however the earlier proof via theorem [ thm : r1 ] has the advantage , that we can explicitly give a sufficient condition on the largeness of the data , required for blow - up .also , not just the norm , but every norm , , blows up .this is easily seen in analogy with the equation , and an application of the first eigenvalue method .also note the blow - up times for the pde case are not to be confused with the blow - up times for the ode case .in this section we numerically simulate the ode system - , as well as the pde system - , ( in 1d and 2d ) , in order to validate our results theorem [ thm : r1 ] , theorem [ thm : d1 ] , theorem [ thm : tp1 ] and corollary [ cor : c2 ] . to this endwe select the following parameter range , these parameters satisfy condition , from . despite this , we see finite time blow - up . the systems are simulated in matlb r2011a . for simulation of the ode systemswe have used the standard routine which uses a variable time step runge kutta method . to explore the spatiotemporal dynamics of the pde system in one and two dimensional spatial domain ,the system of partial differential equations is numerically solved using a finite difference method .a central difference scheme is used for the one dimensional diffusion term , whereas standard five point explicit finite difference scheme is used for the two dimensional diffusion terms .the system is studied with positive initial condition and neumann boundary condition in the spatial domain , , where .note , our proof of blow - up , allows for dirichlet , neumann or robin type boundary conditions .simulations are done over this square domain with spatial resolution , and time step size .we next present the results of our simulations .in the current letter we have shown that the solutions to the system - , modeling a tri - trophic food chain can exhibit finite time blow - up under the condition from theorem [ thm : aziz ] , as long as the initial data is large enough .this is also true in the case of the spatially explicit model .thus the basin of attraction of the invariant set , explicitly constructed in , _ is not all of _ , as claimed in .furthermore system - _ is not dissipative _ in all of , also as claimed in . for a numerical valiadation of these resultsplease see figures [ fig : ode ] , [ fig : pde1 ] , [ fig : pde2 ] .however , the model posesses very rich dynamics , in the parameter region thus an extremely interesting open question is , what is the basin of attraction for an appropriately defined and constructed ? this is tantamount to asking , which sorts of initial data lead to globally existing solutions , under the dynamics of - , and the parameter range ? the same questions can be asked , in the case of the spatially explicit model .the present research of nk is supported by ugc under raman fellowship , project no .5 - 63/2013(c ) and iit mandi under the project no .iitm / sg / ntk/008 and dst under iu - atc phase 2 , project no .sr / rcuk - dst / iuatc phase 2/2012-iitm(g ) .upadhyay , r. k. , naji , r. k. , kumari , n. , dynamical complexity in some ecological models : effects of toxin production by phytoplanktons , nonlinear analysis : modeling and control , 123 - 138 , vol . 12 , no.1 , 2007 .
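for completeness , a minimal python sketch of the explicit finite - difference scheme described in the numerical validation section above is given below ( central differences for the one - dimensional diffusion term , zero - flux neumann boundaries , forward euler in time ) . the reaction term is a quadratic stand - in and the grid parameters are illustrative , since the model right - hand sides and the exact numerical values are not reproduced verbatim here ; the actual reaction terms of the food chain model should be substituted before any quantitative comparison .

```python
import numpy as np

def neumann_laplacian(u, dx):
    """second-order central differences with zero-flux (ghost node) boundaries."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return lap

# illustrative grid, time step and diffusion coefficient
dx, dt, d = 0.1, 1.0e-4, 1.0
x = np.arange(0.0, 10.0 + dx, dx)
r = 50.0 * np.exp(-(x - 5.0) ** 2)          # large, localized initial datum

reaction = lambda r: r**2                   # stand-in for the model's third equation

for step in range(5000):
    r = r + dt * (d * neumann_laplacian(r, dx) + reaction(r))
    if not np.isfinite(r).all() or r.max() > 1e12:
        print(f"numerical blow-up detected at t = {step * dt:.4f}")
        break
```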
a three species ode model based on a modified leslie - gower scheme is investigated in earlier work . there , it is shown that under certain restrictions on the parameter space , the model has bounded solutions for all positive initial conditions , which eventually enter an invariant attracting set . we show that this is not true . on the contrary , solutions to the model can blow up in finite time , even under the restrictions derived there , if the initial data is large enough . we also prove similar results for the spatially extended system . we validate all of our results via numerical simulations . rana d. parshad , nitu kumari and said kouachi
counting statistics is a scheme to calculate all statistics related to specific transitions in a stochastic system . in the counting statistics , a master equation with discrete statesis used to derive time - evolution equations for generating functions related to the specific transitions .the scheme has been used to investigate frster resonance energy transfer , and many successful results have been obtained .although the scheme is basically formulated for a system with a finite number of states , it is possible to use the scheme to investigate a system with an infinite number of states .however , as exemplified later , we have non - closed equations in general , so that it would be needed to develop approximation schemes suitable for specific systems . as a first step , it is important to check whether an approximation scheme for the counting statistics is available for the system with an infinite number of states or not . in the present paper ,we focus on dynamics in genetic switches .it has been shown that stochastic behavior plays an important role in gene regulatory systems , and there are many studies for the stochasticity in the gene regulatory systems from experimental points of view ( e.g. , see ) and theoretical ones ( e.g. , see ) .not only studies by numerical simulations , but also those by analytical calculations have been performed .some analytical expressions for the static properties , i.e. , stationary distributions for the number of proteins or mrnas , have already been obtained .in addition , in order to investigate the role of the stochasticity in genetic switches , dynamical properties , i.e. , switching behavior between active and inactive gene states , have also been studied .basically , such dynamical properties have been investigated by numerical simulations ( e.g. , see ) ; only for a simple system , analytical expressions for the first - passage time distribution have been obtained .the genetic switch is described by a master equation with an infinite number of states .hence , if we can use the scheme of the counting statistics in order to investigate the dynamical properties in the genetic switches , it will be helpful to obtain deeper understanding and intuitive pictures for the genetic switches .the aim of the present paper is to seek the applicability of the counting statistics in order to investigate the dynamical property in the genetic switches .it immediately becomes clear that a straightforward application of the counting statistics derives intractable non - closed equations . in order to obtain simple closed forms, we here employ an effective interaction approximation . as a result, we will show that the switching problem can be treated as a simple two - state model approximately .this result immediately gives us intuitive understanding for the switching behavior and the non - poissonian property .the present paper is constructed as follows . in sec .[ sec_model ] , we give a brief explanation of a stochastic model for the genetic switch . in sec .[ sec_counting_statistics ] , the counting statistics is employed in order to count the number of transitions in the genetic switch , and , as a result , a simple two - state model is derived approximately .the derived approximated results are compared with those of monte carlo simulations in sec .[ sec_results ] .section [ sec_conclusions ] gives concluding remarks .a gene regulatory system consists of many components , such as genes , rnas , and proteins . 
here , a simplified model is used ; mrnas are neglected for simplicity , and an activated gene assumes to directly increase the number of proteins .in addition , in the simplified model , a repressed gene can not produce any proteins .the above model has been used to investigate the switching behavior in previous works , and , for example , see for details of the model .we summarize the model studied in the present paper in fig .[ fig_model ] .the binding interaction is assumed to be a repressed one , and the gene is activated only when the regulatory proteins are not binding the gene .the proteins are produced from the gene in the active state with rate , and proteins are degraded spontaneously with rate .the regulatory proteins bind the gene with a rate function , where is the number of free proteins .for example , for a monomer interaction case , and for a dimer interaction case , where is a rate constant for the binding . is a rate constant with which the regulatory proteins are released from the repressor site of the gene .we here give short comments for the model from the viewpoint of experiments . using this simplified model, we can discuss the connection among the model parameters , the number of proteins , and the switching behaviors . while the number of proteins can be observed or estimated experimentally , as far as we know , there has not been an experimental technique to observe the attachment and detachment of the regulatory proteins directly .we hope that developments of single - molecule observations in future would enable us to give information about the switching dynamics .analytical treatments for the self - regulating gene system have been developed , and an exact solution is known for the monomer interaction case , i.e. , . in order to simplify the analytical treatments , an additional assumption has been used in some previous works ; i.e. , some of proteins are assumed to be inert when the gene state is active .the inert proteins can not repress the gene , and it is not degraded . for the monomer interaction case , there is only one inert protein ; the number of inert protein for the dimer interaction case is two , and so on .note that the assumption of the inert proteins does not have physical meanings ; this only simplify the analytical treatments ( for details , see ) .however , it has been shown that this assumption has little influence of the gene system , and then we employ the assumption in the present paper .let and be states in which there are free proteins for the active and inactive states , respectively .the probabilities for and at time satisfy the following master equations ; \nonumber \\ & + d [ ( n+1 ) p(\alpha_{n+1},t ) - n p(\alpha_n , t ) ] \nonumber \\ & - h n p(\alpha_n , t ) + f p(\beta_n , t ) , \label{eq_master_monomer_1_exact}\\ \frac{d p(\beta_n , t)}{dt } = & d [ ( n+1)p(\beta_{n+1},t ) - n p(\beta_n , t ) ] \nonumber \\ & + h n p(\alpha_n , t ) - f p(\beta_n , t ) , \label{eq_master_monomer_2_exact}\end{aligned}\ ] ] where and are probabilities for free proteins for the active and inactive states , respectively . as stated in sec. [ sec_introduction ] , the exact solutions for stationary distributions of the number of proteins have been derived , and those are expressed using the kummer confluent hypergeometric functions . for details , see . using the concept of the counting statistics ,it is possible to investigate dynamical properties , i.e. , all statistics for the switching behavior between the active and inactive states . 
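the stochastic dynamics encoded in the master equations above can be sampled directly with a standard gillespie algorithm ; a minimal python sketch for the monomer binding case is given below . the symbols g , d , h and f denote the production , degradation , binding and unbinding rate constants , and the numerical values used here are placeholders rather than the parameters employed later in the paper . a single run returns the final copy number of free proteins and the number of inactive - to - active transitions , the quantity whose full statistics is constructed in the next section .

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder rate constants: production, degradation, binding (per protein), unbinding
g, d, h, f = 60.0, 1.0, 0.2, 10.0

def gillespie(t_end):
    t, n, active = 0.0, 0, True
    activations = 0                       # counts inactive -> active transitions
    while True:
        if active:
            rates = (g, d * n, h * n)     # produce, degrade, bind (monomer case)
        else:
            rates = (0.0, d * n, f)       # repressed gene: no production; degrade, unbind
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return n, activations
        u = rng.uniform(0.0, total)
        if u < rates[0]:
            n += 1                        # protein produced
        elif u < rates[0] + rates[1]:
            n -= 1                        # protein degraded
        elif active:
            active = False                # regulatory protein binds the operator
        else:
            active = True                 # protein released, gene reactivated
            activations += 1

print(gillespie(t_end=100.0))
```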
in the present paper , as an example , we calculate the number of transitions from the inactive state to the active state .the generating functions for the transitions are immediately obtained from the master equations and . a brief explanation of the counting statistics is given in the appendix , and we here give consequences of the counting statistics . a probability , with which there are transitions from the inactive state to the active state during time , is denoted by .the generating function for is defined as where is a counting variable .the generating function gives all information related to `` inactive active '' transitions .according to the scheme of counting statistics , we split into restricted generating functions and , where and are the generating functions for the system in states and at time , respectively . using the scheme of the counting statistics ,we obtain the following time - evolution equations for the restricted generating functions and : \nonumber \\ & + d [ ( n+1 ) \phi(\alpha_{n+1},\lambda , t ) - n \phi(\alpha_n,\lambda , t ) ] \nonumber \\ & - h n \phi(\alpha_n,\lambda , t ) + \lambda f \phi(\beta_n,\lambda , t ) , \label{eq_cs_1_exact}\\ \frac{d \phi(\beta_n,\lambda , t)}{dt } = & d [ ( n+1)\phi(\beta_{n+1},\lambda , t ) - n \phi(\beta_n,\lambda , t ) ] \nonumber \\ & + h n \phi(\alpha_n,\lambda , t ) - f \phi(\beta_n,\lambda , t ) . \label{eq_cs_2_exact}\end{aligned}\ ] ] although eqs . and are similar to eqs . and , note that the final term in the right hand side of eq . has a factor .the factor is introduced in order to count the number of transitions , and we can count the number of transitions related to this term ( for details , see appendix ) . using the aboverestricted generating functions , the generating function is calculated as next , we introduce the following generating functions for and : it is straightforward to derive the time - evolution equations for the new generating functions and from eqs . and; \nonumber \label{eq_cs_1_exact_modified } \\ & - hz \frac{\partial \alpha(\lambda , z , t)}{\partial z } + \lambda f \beta(\lambda , z , t ) , \\\frac{d \beta(\lambda , z , t)}{dt } = & - ( z-1 ) d \frac{\partial \beta(\lambda , z , t)}{\partial z } \nonumber \\ & + hz \frac{\partial \alpha(\lambda , z , t)}{\partial z } - f \beta(\lambda , z , t ) .\label{eq_cs_2_exact_modified}\end{aligned}\ ] ] using the generating function and , the generating function is given by and therefore it is enough to solve the following time - evolution equations in order to calculate the generating function : where we define and .note that eqs . andcontain the derivative of with respect to .hence , the equations are not closed .if these terms are expressed simply using , we will have simultaneous differential equations written only by the generating functions and ; i.e. , we have closed equations and hence the obtained equations may be solved analytically . in the following analysis , an effective interaction approximation is employed , and we will show that the above statistics can be approximated by a simple two - state model . in the effective interaction approximation , the interaction function is replaced as a constant value . as shown in , the dependence of on makes it difficult to obtain analytical results , and it has been shown that the approximation gives qualitatively good results . replacing the interaction function as where is a constant , we obtain the following equations instead of eqs . and : note that eqs . 
andare written only by and .it means that the switching problem can be approximated as a simple two - state model _ if _ the effective interaction is chosen adequately .we here briefly explain the choice of the effective interaction using a simple example , i.e. , the monomer binding interaction case .for the monomer binding interaction , the interaction function is calculated as follows . in this case , the interaction function is . in order to obtain the effective interaction , the number of proteins replaced as the average number of proteins , i.e. , where is the expectation of the number of free regulatory proteins under a condition that the gene is in the active state ( conditional expectation ) .the conditional expectation can be calculated from the stationary distribution of the number of proteins .note that the generating functions and are reduced to generating functions for the stationary distribution of the number of proteins when . hence , as shown in , they are written as follows . , \\\beta(z ) \equiv & \lim_{t\to \infty}\beta(\lambda=1,z , t ) \nonumber \\ = & \left ( 1 + \frac{\tilde{h}}{f } \right ) a f[a-1,b-1,n(z-1 ) ] - \alpha(z ) % \nonumber \\ % & - \lim_{t \to \infty}\alpha(\lambda=1,z , t),\end{aligned}\ ] ] where and is the kummer confluent hypergeometric function , where .we , therefore , obtain by inserting eq . into eq ., the following self - consistent equation is derived : solving eq . , we obtain we finally comment on a solution of the simple two - state model ( eqs . and ) .the simple two - state model can be solved exactly , and the probability distribution for the number of `` inactive active '' transitions during time is explicitly written as follows : where , , and are modified bessel functions of the first kind .this expression immediately gives us the non - poissonian picture of the phenomenon .active '' transitions . ( a ) monomer binding interaction case .( b ) dimer binding interaction case . in each figure , filled circles and filled boxes are monte carlo results for time and , respectively .solid and dashed lines corresponds to approximated analytical results of eq . for time and , respectively ., title="fig:",width=264 ] + active '' transitions .( a ) monomer binding interaction case .( b ) dimer binding interaction case . in each figure ,filled circles and filled boxes are monte carlo results for time and , respectively .solid and dashed lines corresponds to approximated analytical results of eq . for time and , respectively ., title="fig:",width=264 ] in order to check the validity of the analytical treatments and the approximations , we here compare the analytical results with those of monte carlo simulations .the original genetic switch explained in sec .[ sec_model ] was simulated using a standard gillespie algorithm .the parameters used in the simulation are as follows : .note that these parameters were selected as one of the typical values used in the previous works .firstly , we consider the monomer binding interaction case . according to the discussions in sec .3.3 , the value of the effective interaction is calculated as .figure [ fig_results](a ) shows the results of the analytical calculations ( eq . ) and those of the monte carlo simulations .although there are quantitative differences , the results shows that the approximated two - state model captures the essential features of the phenomenon .next , we consider a dimer binding interaction case , i.e. , . 
in this case , the effective interaction is calculated as follows : as shown in , the effective interaction is obtained by solving the following self - consistent equation : we here numerically solved the self - consistent equation ( eq . ) , and the calculated value of the effective interaction is . using the calculated value , we depict the analytical results and the corresponding monte carlo results in fig .[ fig_results](b ) . from the comparison, we confirmed that the approximated two - state model is available even in the dimer binding interaction case .although results are not shown , we performed numerical simulations for other some parameters , and checked the validity of the analytical treatments .for example , even for parameter regions in which the probability distribution of the number of proteins has bistability , the approximation scheme works well .in the present paper , we studied an analytical scheme to extract information related to the dynamical behavior in genetic switches . using an effective interaction approximation ,a simple two - state model is obtained , and we confirmed that the two - state model captures the features of the phenomenon . note that in the analytical treatments , we did not neglect the stochastic properties of the system ( except for the effective interaction approximation ) ; i.e. , we can calculate all statistics for transitions approximately , including higher order moments. it could be possible to apply the above effective expression for the transitions between the active and inactive states to more complicated gene regulatory networks without loss of the stochasticity ; this would give us deeper understanding for the switching behavior of the gene regulatory systems including static , dynamical , and stochastic behaviors .in addition , the idea of the effective interaction may be similar to the mean - field approximation in statistical physics ; the interaction is replaced with the average .it may be possible to develop higher - order approximations using the analogy with the conventional approximation schemes in statistical physics ; this is an important future work .we discussed properties only in the stationary states , because the effective interaction approximation has been applied only for the stationary states at the moment ; the average number of proteins ( or higher moments ) should be estimated adequately , and it was calculated by using the analytical solutions for the _ stationary _ distributions of the number of proteins . recently, exact time - dependent solutions for a self - regulating gene have been derived .hence , it may be possible to extend the effective interaction approximation to non - stationary states .if so , the effective interaction would be time - dependent , and , at least numerically , it is possible to calculate various moments for the counting statistics for time - dependent systems .we expect that the simple description developed in the present paper is available for various cases , such as complicated regulatory systems and time - dependent systems , and that the description gives new insights for the regulation mechanisms and stochastic behaviors .this work was supported in part by grant - in - aid for scientific research ( nos .20115009 and 21740283 ) from the ministry of education , culture , sports , science and technology ( mext ) , japan .here , we give a brief explanation for the counting statistics for readers convenience ( for details , see . 
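as a complement to the comparison above , the reduced two - state description itself is easy to simulate : the python sketch below generates trajectories of a telegraph process with an effective active - to - inactive rate and an inactive - to - active rate f , counts the number of activations in a fixed time window , and estimates the fano factor , whose deviation from unity reflects the non - poissonian character of the switching . the rate values are placeholders , not the self - consistently determined effective interaction .

```python
import numpy as np

rng = np.random.default_rng(1)

h_eff, f, t_end = 1.3, 10.0, 50.0      # placeholder effective rates and time window

def count_activations():
    t, active, k = 0.0, True, 0
    while True:
        t += rng.exponential(1.0 / (h_eff if active else f))
        if t >= t_end:
            return k
        if not active:
            k += 1                      # inactive -> active transition counted
        active = not active

samples = np.array([count_activations() for _ in range(20000)])
mean, var = samples.mean(), samples.var()
print(f"mean = {mean:.2f}, variance = {var:.2f}, fano factor = {var / mean:.2f}")
```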
) in the framework of counting statistics , the quantity of interest is the number of target transitions . it is needed to set multiple target transitions in the genetic switches , and the genetic switches have two states , i.e. , active and inactive states . in the following explanations , a simple setting , in which there is only one transition matrix and only one target transition ,will be discussed because it is straightforward to apply the following simple discussions to the genetic switches .let be a transition matrix .we here derive the generating function for counting the number of events of a _ specific _ target transition .denote the probability , with which the system starts from state and finishes in state with transitions from to during time , as . in order to calculate the probability , we here define a probability with which the system evolves from state to state , provided no transitions occur during time . by using the probability ,the probability is calculated as where denotes the convolution .this formulation means that an occurrence of the target transition is sandwiched in between situations with no occurrence of the target transition , and it is repeated times .next , we construct the generating function of the probability : that is , the generating function gives the statistics of the number of transition during time under the condition that the system starts from state and ends in state .the generating function satisfies the following integral equation and obeys the following time - evolution equation where . in order to show , we used the following two facts. firstly , the probability of no target transitions , , obeys where .secondly , the derivative of the convolution is given by using the generating function , we construct restricted generating functions as follows : where is a probability distribution at initial time . from and ,the restricted generating function satisfies and these equations should be solved with initial conditions .the summation of for gives the objective generating function for counting the number of events of the specific target transition .elowitz , a.j .levine , e.d .siggia , and p.s .swain , science * 297 * , 1183 ( 2002 ) .rao , d.m .wolf , and a.p .arkin , nature * 420 * , 231 ( 2002 ) .m. krn , t.c .elston , w.j .blake , and j.j collins , nature rev .genetics * 6 * , 451 ( 2005 ) .j. hasty , j. pradines , m. dolnik , and j.j collins , proc .sci u.s.a * 97 * , 2075 ( 2000 ) .m. sasai and p.g .wolynes , proc .sci u.s.a * 100 * , 2374 ( 2003 ) .hornos , d. schultz , g.c.p .innocentini , j. wang , a.m. walczak , j.n .onuchic , and p.g .wolynes , phys .e * 72 * , 051907 ( 2005 ) .xu and y. tao , j. theor .biol . * 243 * , 214 ( 2006 ) .d. schultz , j.n .onuchic , and p.g .wolynes , j. chem .phys . * 126 * , 245102 ( 2007 ) .v. shahrezaei and p.s .swain , proc .sci u.s.a * 105 * , 17256 ( 2008 ) .a.m. walczak and p.g .wolynes , biophy .j. * 96 * , 4525 ( 2009 ) .j. venegas - ortiz and m.r .evans , j. phys .a : math . theor . * 44 * , 355001 ( 2011 ) .
the applicability of counting statistics to a system with an infinite number of states is investigated . counting statistics has been studied extensively for systems with a finite number of states . while it is possible in principle to use the scheme to count specific transitions in a system with an infinite number of states , one obtains non - closed equations in general . a simple genetic switch can be described by a master equation with an infinite number of states , and we use the counting statistics in order to count the number of transitions from inactive to active states in the gene . to avoid the non - closed equations , an effective interaction approximation is employed . as a result , it is shown that the switching problem can be treated approximately as a simple two - state model , which immediately indicates that the switching obeys non - poisson statistics .
the reduction of particle size can be achieved under many different conditions . in particular , the multiple breakage of crystals occurs during distinct research and technological processes such as , for instance , milling or recirculation loops .it is , in fact , a very complex phenomenon in which the quantity known as fragment size distribution function ( fsdf ) is relevant . in consequence ,the problem of finding the time - dependent fsdf ( tdfsdf ) corresponding to an event of solid multiple breakage , as a function of the main macroscopic measurable variables is of key interest in many areas .for instance , there are experimental reports on breakage rates , and on size distribution during particle abrasion . the usual theoretical approach can be found , in a much complete form , in the work by hill and ng , and more recently in the study published by yamamoto et al . .the aim of the present work is to provide an alternative way to determine the tdfsdf in multiple breakage processes as a function of the relevant macroscopic variables .it is based on the use of a nonextensive statistical description , as it has been previously done in modeling the problems of fragmentation , cluster formation and particle size distribution .the original report on this particular version of the statistics is due to tsallis , who postulated a generalized form of the entropy that among other things intends to account for the frequent appearance of power - law phenomena in nature . on the other hand ,the time evolution of the fragment distribution is presented as obeying a fractal - like kinetics .then , the combination of the nonextensive statistics and the fractal kinetics will allow to derive the expressions for the volume and characteristic length of fragments in breakage events .the paper is organized in such a way that the next section contains a detailed derivation of the distribution functions .then , the section iii is devoted to present and discuss the application of these functions in order to fit with available experimental data and , finally the section iv is devoted to the conclusions .if progressive particle fragmentation takes place within a liquid environment , then quantities such as shear rate and viscosity can be some of the macroscopic variables mentioned above . on the other hand , since the multiple particle breakage involves the effect of inertial impact with container walls or with another particles , it is possible to assume that the fragment mass and the shear rate must appear in the distribution function .the effect of attrition also determines the fragment size , so we also need to take into account the influence of viscosity . besides , both inertial impact and viscosity depend of fragment concentration .all these variables appear in [ 1 ] as the main factors governing the fsdf .the model considers the following quantities * of the fragments .* rate ( gradient of the velocity ) * viscosity . * of fragments per unit volume . if the basic dimensions entering this problem are mass ( ) , length ( ) , and time ( ) ( all positive , as noticed ) ; then , according to the vaschy - buckingham theorem , the law relating all these variables can be transformed to one which will include only one dimensionless variable .this provides a method for computing sets of dimensionless parameters for the given variables even if the form of the equation is still unknown . 
}let us look at this statement core closely.the dimensions involved in the variables above listed are : ; ; ; .accordingly , a single dimensionless quantity defined from them could be thus , the problem of finding the distribution of volume ( mass ) , or fsdf , can be formulated as the derivation of the distribution function of the dimensionless variable .this can be accomplished using basic physics principles such as the second law of thermodynamics , which is nothing but a maximization of the system s entropy . however , the breakage is a phenomenon with long - range correlation among different parts of the system , and the use of the boltzmann - gibbs ( bg ) entropy is not suitable in this case .when long - range correlation is relevant , it turns out that the use of the so - called tsallis entropy [ 2 ] reveals to be convenient . in its continuous version ,the form of this entropy ( in units of the boltzmann constant ) is : in this expression , is the probability density function .the quantity is known as the `` degree of non - extensivity '' and , in principle , can take any real value . with the use of the lhpital rule , it is possible to verify that , under the normalization condition the limit when leads to the bg entropy . the search for a maximum of must include some constraints .one of them is , precisely , the normalization condition which is usual in the analysis of the bg case . in the integration ,the upper limit is the maximal value of the dimensionless variable , which corresponds to the maximal value of the mass of the fragments , , under stationary conditions for the remaining parameters .in other words , . the second constraint is not that usual . in this case , it is customary to impose the finiteness of the so - called -mean value , also named as the first - order -moment : and , then , the constrain condition should read the same limits of integration considered in ( 3)-(4 ) apply for the integral in the equation ( 2 ) .actually , integration limits should include a minimal fragment size that corresponds to the situation when the breakage process can not yield fragments of smaller dimensions .however , in order to simplify the treatment , our approach sets the lower integration limit as zero .under the constrains mentioned , the problem of finding the maximum of the entropy is no other than a lagrange multipliers one .so , we define the lagrange functional =s_q+\alpha\left ( \int_0^a p(\xi)\,d\xi-1 \right ) + \beta\left ( \int_0^a \xi\,p^{\,q}(\xi)\,d\xi-\mu \right ) , \ ] ] and demand the fulfillment of the result for the probability density function has the form ^{-\frac{1}{q-1}}.\ ] ] once the lagrange multipliers and are properly determined , the final expression for is : ^{-\frac{1}{q-1}}\!\!\!\!\!;\\ p(\xi)&=&\left ( \frac{2-q}{\mu}\right ) ^{\frac{1}{2-q}}\left [ 1-\frac{\xi}{a}\right ] ^{-\frac{1}{q-1}}.\end{aligned}\ ] ] during the process of continuous breaking , the concentration of fragments of a given size varies. therefore , we must consider as a time - dependent variable .this dependence can be considered as if the fragments were a given species originated during the process .therefore , the problem will be to determine the kinetics of the fragments . 
in a complex system ,a very general kinetic equation for a given species can be posed in the form of a `` fractal '' differential equation : where is the reacting coefficient and is a fractional time index .the solution of the equation ( [ kin ] ) is and defines a kind of `` weibull kinetics '' , and is a result of our conjecture about the variation of the total number of fragments .if we name as the total volume of the system , then the density of the fragments of the n species is if we set as the initial concentration , the crystal density , and the volume of the crystal fragment , it is possible to write and , in correspondence , the time evolution of our dimensionless variable will be -according to ( [ evol ] ) : within this context , the time - dependent probability density distribution function for the volume of the fragments can be written as : ^c;\mbox{\hspace{5.6cm}}\\ { \rm where}\mbox{\hspace{14.7cm}}\nonumber\\ \omega=\left ( \frac{2-q}{\mu } \right ) ^{\frac{1}{2-q } } \frac{\rho\dot{\gamma}n_0}{\eta}^{\!\!\!1/3 } ; \mbox{\hspace{5.8cm}}\\ b=\frac{q-1}{2-q } \left ( \frac{2-q}{\mu}\right ) ^{\frac{1}{2-q}}\frac{\rho\dot{\gamma}n_0}{\eta}^{\!\!\!1/3 } ; \mbox{\hspace{4.6cm}}\\ c=\frac{1}{1-q}. \mbox{\hspace{8.3cm}}\end{aligned}\ ] ] it is possible to notice that , written in the form ( [ chi - t ] ) , the time - dependent probability density function explicitly depends on the macroscopic measurable variables of the system .on the other hand , the integration of ( [ chi - t ] ) over the crystal volume , from zero to , defines the fraction of fragments with a volume size smaller that , in the system : if the volume of the largest fragment , , is taken as the unity , one may clearly see that ; because any particle in the system will have a volume smaller than the maximal one .this provides a condition for the normalization of the distribution , so one obtains ^{\frac{2-q}{1-q } } } { 1-[1-b e^{at^\nu}]^{\frac{2-q}{1-q } } } \label{fv}\ ] ] the distribution ( [ fv ] ) gives the fraction of the crystal particles that , at the time , have a volume smaller or equal to .now , if we look for a distribution in terms of the particle s characteristic length , , instead of volume , we state the cubic dependence of the particle volume with as .accordingly , .then , ^{\frac{1}{1-q}}. \label{dens}\ ] ] again , the length distribution function is defined as the integral of from zero to a given . a procedure analogous to that previously described leads to the normalized length distribution ^{\frac{2-q}{1-q}}}{1-[1-\left ( \frac{1}{\sigma^2}\right ) b e^{\kappa t^\nu}]^{\frac{2-q}{1-q}}}.\ ] ]in order to test our model we fit the results of the particle diameter distribution of phosphate minerals reported as an outcome of a commercial particle size analyzer ( see ref . ) . since the experimental information is not time - depending , we chose a stationary fitting procedure that uses the expression derived for the -normalized- distribution density in such a way that the proposed expression will contain three adjusting parameters , .the choice of and is due the presence of such unknown quantities as , , and the time - related exponents in the expression .then ; ^{\frac{1}{1-q}}. \label{fit}\ ] ] the normalized data and the resulting curve appear in the fig .1 . 
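as a consistency check of the weibull - type kinetics introduced above , the short python sketch below integrates an assumed fractal kinetic equation of the form dn / dt = \kappa \nu t^{\nu - 1 } n ( the precise form of the equation is not reproduced above , so this should be read as an assumption ) and compares the numerical solution with the closed form n ( t ) = n ( t_0 ) \exp [ \kappa ( t^{\nu } - t_0^{\nu } ) ] .

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, nu = 0.8, 0.6        # placeholder reacting coefficient and fractional index
t0, n0 = 0.1, 1.0           # start slightly after t = 0 to avoid the t**(nu-1) singularity

sol = solve_ivp(lambda t, n: kappa * nu * t ** (nu - 1.0) * n,
                [t0, 5.0], [n0], rtol=1e-9, atol=1e-12, dense_output=True)

t = np.linspace(t0, 5.0, 6)
numeric = sol.sol(t)[0]
analytic = n0 * np.exp(kappa * (t ** nu - t0 ** nu))
print(np.max(np.abs(numeric - analytic)))   # agreement up to the integration tolerance
```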
we used the wolfram mathematica `` nonlinearmodelfit '' package with the finite difference gradient method and the sole restrictions , and .the best fit parameters obtained are : , , and , with the goodness of the fit determined by the parameter with a value of .it is worth to remark that the particular value for the tsallis q - parameter in this fragment size distribution is below the reported by calboreanu et al . . conti andnienow reported on the abrasion experiments in a solution with a solid phase of nickel - ammonium sulphate hexa - hydrate crystals .the procedure included stoping the agitation at certain intervals .then , the abrasion fragments were separated from the liquid to determine the total mass abraded and to measure size distribution .the counted particles were grouped into a number of size ranges in order to give a clearer representation of the changes of size distribution with time .we have , for instance : m ( range 1 ) ; m ( range 2 ) ; m ( range 3 ) . assuming a constant shape factor , the total particle mass in each of the ranges was calculated ( see both table 1 and figure 1 in ref . ) .we use the kinetic model proposed in this work for fitting the mentioned results in ref . of the time - dependent distribution of abraded particle mass ( size ) .the expression used to fit the different data sections is derived from ( [ fit ] ) considering that the exact values of the particle size in each data point are not known and will be considered as parts of the adjustment .this leads to consider the exponents in the time - evolution law ( [ evol ] ) as new fitting parameters , , through the substitutions and ; considering and as another two fitting parameters ; ^{\frac{1}{1-q}}. \label{fitt}\ ] ] the figure 2 contains the results of the fitting of the total mass of the fragments ( curve and dots in black ) as well as the fitting of the data corresponding to the above mentioned three first particle size ( mass ) ranges appearing in table 1 of conti and nienow report ( in blue , red , and purple color , respectively ) .the fitting was carried out with the same package above referred .the conditions for the fitting procedure were set as : , ( in order to keep the distribution as a real quantity ) , , , and . in all cases the characteristic fitting index ranked above .the values obtained for the distinct parameters are the following .total mass ( black , diamonds ) : , , , , .first size range ( blue , circles ) : , , , , .second range ( red , squares ) : , , , , .third range ( purple , triangles ) : , , , , .it must be stressed that the values of -exponent here obtained constitute a validation of our hypothesis of fractal kinetics to describe the time evolution of the fsdf . 
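the same fitting procedure can be reproduced outside mathematica , for instance with scipy 's curve_fit . since the precise three - parameter expression of eq . ( [ fit ] ) is not fully legible here , the sketch below uses a stand - in density of the same q - exponential family ( a squared - length prefactor times a q - deformed factor of the volume ) , and the synthetic data , initial guesses and parameter bounds are purely illustrative ; the stand - in model should be replaced by eq . ( [ fit ] ) before any quantitative use .

```python
import numpy as np
from scipy.optimize import curve_fit

def model(ell, omega, b, q):
    """stand-in q-exponential-type density in the particle length ell."""
    base = np.clip(1.0 - b * ell**3, 1e-12, None)   # keep the power-law factor real
    return omega * ell**2 * base ** (1.0 / (1.0 - q))

rng = np.random.default_rng(2)
ell = np.linspace(0.05, 0.95, 15)                                    # normalized diameters
dens = model(ell, 2.5, 0.9, 0.4) + 0.01 * rng.normal(size=ell.size)  # synthetic data

popt, _ = curve_fit(model, ell, dens, p0=[1.0, 0.5, 0.5],
                    bounds=([0.0, 0.0, 0.0], [np.inf, 1.0, 0.99]))
print("fitted (omega, b, q):", popt)
```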
on the other hand, it is possible to notice that a good fitting can be achieved by demanding that the non - extensivity parameter be a non - integer with values between and , as previously found .one readily notices that , the highest values of obtained are around , which is quite below the value of , and the value of reported in ref .the use of a non - extensive statistical description is a correct choice for deriving the fragment size distribution function arising from processes of multiple breakage of crystals in stirred vessels .this can be performed both under static and time - dependent conditions , with the introduction of a suitable fractal kinetics , and fractional power time evolution .the proposed distribution functions were tested by fitting available experimental data .we have shown that the statistics of multiple breakage phenomenon can be modeled via a tsallis entropy .on the other hand , we have obtained values of the magnitude of the fractal time exponent within the range between and .
a time - dependent statistical description of multiple particle breakage is presented . the approach combines the tsallis non - extensive entropy with a fractal kinetic equation for the time variation of the number of fragments . the resulting fragment size distribution function is tested by fitting previously reported experimental data . + * keywords * : particle breakage ; nonextensive statistics ; time dependence
random number generators are an important element in various cryptographic constructions .they are needed for generating keys , initialization vectors , and other parameters .the lack of randomness has devastating consequences for the security of cryptographic constructions and protocols .well known examples are the broken seeding of openssl pseudo - random generator in debian ( resulting in limited set of potential keys generated by openssl ) , fixed parameter used in sony playstation 3 implementation of ecdsa signing ( resulting in compromised private key ) , and many other .there are two types of random number generators : * ( non - deterministic / true ) random number generators use a non - deterministic entropy source together with some additional post - processing to produce randomness ; * pseudo - random number generators typically deterministic algorithms that produce pseudo - random data from a seed ( hence the seed must be random and unpredictable , produced by non - deterministic random number generator ) .various pseudo - random generators are standardized .for example , the list of approved random number generators for fips 140 - 2 can be found in .modern operating systems implement random generators and allow application to use them .linux uses crc - like mixing of data from entropy sources to an entropy pool and extracts random data via sha-1 from the entropy pool , freebsd uses a variant of yarrow algorithm , windows provides multiple prngs implementations via cng ( cryptography , next generation ) api with underlying generator compliant with nist sp 800 - 90 .new generation of processors often contain hardware random numbers generators suitable for cryptographic purposes .the unpredictability of random number generators is essential for the security of cryptographic constructions .good statistical properties , assessed by batteries of test such as nist , testu01 or diehard , are necessary but not sufficient for unpredictability of the generator .an interesting assessment and details of intel s ivy bridge hardware random number generator was published recently .since pseudo - random generators require seeding , usually multiple low - entropy sources are used to gather sufficient entropy into so - called entropy - pool for seeding or reseeding .low - entropy sources are various system counters and variables , such as cpu performance and utilization characteristics , the usage of physical and virtual memory , file cache state , time , process and thread information , etc .an extensive list of operating system variables used to fill the windows entropy pool can be found in .an example of relatively simple design of prng based on timings from a hard disk drive is described in .there can be various motivations for designing and implementing own pseudo - random number generator performance , a lack of trust to generators provided by operating system or other components ( a well known story of dual_ec_drbg ) or even those provided by cpu hardware , experimenting , additional features / properties not present in available generators , etc .all pseudo - random generators must be seeded . in case of windows operating systemwe can easily use various counters as low - entropy sources .we analyze the standard set of performance counters in windows operating system .our goal is to answer the following questions : * what counters are best suited as entropy sources , and how much entropy they provide ? * are these sources independent or correlated ? 
* are these sources sufficiently independent on the operating system state ( e.g. reboot or restoring from snapshot of a virtual system ) ?let us remind that counters are not used directly , they serve for seeding a pseudo - random generator .we do not expect the counters will satisfy some battery of statistical tests we measure primarily the entropy of individual counters and the mutual information of promising counters .section [ prelim ] defines some notions and it also justifies a method of preprocessing the counters .results of our experiments are presented in section [ result ] .we summarize the implications of the results and outline the possible further research in section [ concl ] .let be a discrete random variable with values .let denote the probability that attains the value , i.e. =p_i$ ] .a standard measure of uncertainty is the shannon entropy of a random variable .we denote it : where denotes a logarithm with base 2 , and we use a convention when necessary .the smallest entropy in the class of all rnyi entropies is a min - entropy : particularly , for any arbitrary discrete random variable .the min - entropy measures the uncertainty of guessing the value of in a single attempt .it is impossible to know exact probability distributions for most of the operating system counters .we sample the values of the counters in our experiments .after `` preprocessing '' ( see section [ ppcnt ] ) we use a maximum likelihood estimate , where each probability is estimated as ratio of observed values to the number of samples .there are better estimators of entropy with reduced bias , such as miller - madow , nsb and others . however , we think that simple estimate is sufficient for our classification of the performance counters .let be a sequence of values from a finite set .according to the previous paragraph , we use notations and for the entropy and the min - entropy , respectively , where the probabilities are estimated as follows : for all .there are two inherent problems when measuring the randomness of operating system counters .the first problem is that some counters , in fact , `` count '' .therefore , we get a sequence of increasing values depending on chosen sampling interval . calculating entropy based on unique values ( regardless of their non / uniformity ) yields always the same value for samples we get . in order to have meaningful entropy values, we have to limit the possible values / outcomes of the random variable ( for example by splitting counter into smaller chunks ) .the second problem is that counters values are presented as 64-bit integers and therefore the majority of bits are constant for any reasonable amount of time .this can distort the measured entropies and lead to unrealistic low estimates , when we use all bits of a counter .let us assume a fictitious 64-bit counter , where the last bit of is random and all other bits of are zero , i.e. counter can have only two different 64-bit values and .it is easy to see that .however , splitting into bits and calculating the entropy from each bit of the counter yields : ( if we divide the above mentioned two 64-bit values to bits we get 128 bits in total , where 127 are equal to zero and only one is non - zero ) . 
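the plug - in estimates of the entropy and the min - entropy used throughout the paper are straightforward to compute ; a short python sketch is given below ( the byte - splitting variant of the fictitious - counter example continues right after the sketch ) . the input is assumed to be the sequence of already preprocessed values .

```python
import math
from collections import Counter

def plugin_entropies(samples):
    """maximum-likelihood (plug-in) estimates of the shannon and min-entropy, in bits."""
    counts = Counter(samples)
    n = len(samples)
    probs = [c / n for c in counts.values()]
    h1 = -sum(p * math.log2(p) for p in probs)
    hmin = -math.log2(max(probs))
    return h1, hmin

# example with a made-up byte sequence; in the experiments the samples are the
# preprocessed counter values grouped into bytes
data = [0, 1, 1, 2, 3, 3, 3, 7] * 100
print(plugin_entropies(data))
```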
splitting into bytes yields : ( we get 16 bytes , where one is equal to and remaining 15 bytes are equal to zero ) .we deal with these problems by preprocessing the values by a transformation .let be a -bit vector .let be a positive integer such that divides .the transformation produces -bit output from -bit input : the transformation is chosen with the aim of simplicity .notice that a complex transformation can distort the estimates in the opposite direction even simple incremental counter transformed by a cryptographically strong hash function will look like ideal random source .the parameter can be interpreted as an appetite for getting as much randomness as possible from a single value .value of affects the estimated entropy . as noted earlier in the discussion on random counters vs. simple incremental counters , too high results in overestimating the entropy . on the other hand ,too small can be very pessimistic for example , we lose bits of random data for each value of a truly random counter . in order to deal with counters that incrementregularly we apply difference operator to the sampled counter values : trivially , the difference of random , independent values yields again random and independent values . applying the difference operator allows measuring the entropy of change between successive samples of the counter .since the difference operator does not do any harm to `` good '' counters we apply it to all counters .moreover , after applying ( i.e. is applied first and second ) we group the obtained values into bytes .we take bytes as values of the random variable we use to measure the entropy .to represent negative numbers , introduced by the difference operator , we use their absolute value with a sign at the most significant bit position .a testing environment consisted of virtual pc emulated in vmware workstation .the virtual pc was created with 2 vcpu , 2 gb ram and default settings of virtualization platform .we used clean installation of windows 7 , the desktop operating system with the largest share world - wide .we implemented our experiments using .net platform .we also applied sp1 and all available updates ( hot - fixes ) provided by microsoft . during experiments the virtual pc was idle , with no user interaction , i.e. only the sampling application and default services were running .our first step was to eliminate weak counters and select promising ones .we proceed as follows : 1 .enumerate all operating system s performance counters .we identified 1367 counters .2 . let us define a sampling round as follows : sample all counters and then wait for 20ms .3 . repeat the sampling round to collect 10000 values for each counter . eliminate counters that are constant ,i.e. the entropy is zero .this resulted in 273 counters .re - sample remaining 273 counters in 100001 sampling rounds .eliminate counters that are constant we got 266 counters after this elimination . 
+ it is interesting that we have got counters non - constant in 10000 samples , but constant in 100001 samples .this anomaly was caused by some unknown rare event which happened while sampling the first 10000 samples and then never repeated during the first sampling and even the second , ten times longer , sampling period .this event influenced the following six counters ( they were constant before and after this event ) : * cache , read aheads / sec " increased * event tracing for windows , total number of distinct enabled providers " decreased * event tracing for windows , total number of distinct pre - enabled providers " increased * logicaldisk , split io / sec , _ total " increased * physicaldisk , split io / sec , _ total " increased * synchronization , exec .resource no - waits acqexcllite / sec , _ total " increased + the last `` strange '' counter was memory , system code resident bytes " ( see figure [ figc ] ) .this counter was oscillating in range from 2527232 to 2576384 for first 1643 samples and then settled down to the maximal value of the range ( 2576384 bytes 629 pages of 4 kb ) and never changed again .apply the difference operator to obtained values ( now we have 100000 delta - values for each counter ) .after eliminating counters with constant delta - values , we obtained 263 counters .apply the transformation for , and group individual results into bytes .thus we get for each counter four separate sequences of bytes ( 12500 , 25000 , 50000 and 100000 bytes for respectively ) .we denote these sequences for counter as , , and . + setting too high yields low entropy per bit values .for example when no counter have value of per bit . for counter have value of per bit , i.e. no counters fill be in upper triangle on figure [ fig1 ] .experiments imply that decreasing generally increases scaled entropy value per bit .eliminate counters that are constant for some .none of the counters were eliminated in this step .we call all remaining 263 counters `` green '' . the result of this process is summarized in table [ tab1 ]. .overview of counters elimination . [ cols= " <, > " , ] for further analysis we select all independent counters and one counter for each dependent group ( the counter with highest value of our combined entropy metric ) .this reduces the number of counters to , with total entropy bits per each 264ms this is a conservative estimate assuming ( time includes 20ms of sleeping and 13ms of collecting counters for each one of eight rounds ) .virtual servers and pcs ( vdi ) are increasingly common in everyday reality of corporate it .we try to answer a natural question about independence of selected counters in this environment .we created a snapshot of our ( virtual ) experimental windows 7 system . after resuming the system , sampling starts immediately .we collect data for three resumptions from the same snapshot . similarly to section [ mutual ] we compute mutual information , this time it is the mutual information between samples of the same counter .moreover , in order to focus on potential correlations in short time span , the mutual information is calculated using floating window of length 1400 half - byte samples .this window corresponds to roughly 3 minutes , for sampling interval 20ms and . before diving into more detailed analysis ,let us emphasize that the experiment showed mostly negligible mutual information of different runs of each counter . 
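the windowed mutual - information statistics can be estimated with the short python sketch below . the alphabet size corresponds to half - byte ( 4 - bit ) samples and the window length of 1400 follows the text ; the window step and the example sequences are illustrative , and the nonzero values obtained even for independent uniform inputs ( of the order of 0.1 bit for a window of 1400 and a 16 x 16 joint alphabet ) reflect the bias of the plug - in estimator rather than genuine dependence . the exceptions observed in the experiments are discussed next .

```python
import numpy as np

def mutual_information(x, y, alphabet=16):
    """empirical mutual information (bits) of two equally long half-byte sequences."""
    joint = np.zeros((alphabet, alphabet))
    for a, b in zip(x, y):
        joint[a, b] += 1.0
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])))

def sliding_mi(x, y, window=1400, step=350):
    """mutual information between two runs of a counter over a floating window."""
    return [mutual_information(x[i:i + window], y[i:i + window])
            for i in range(0, len(x) - window + 1, step)]

rng = np.random.default_rng(3)
run_a, run_b = rng.integers(0, 16, 20000), rng.integers(0, 16, 20000)
print(max(sliding_mi(run_a, run_b)))   # of order 0.1 bit: the plug-in bias for 256 cells and n = 1400
```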
with exception of short spike of counter ( memory , pool nonpaged allocs ) ,all other mutual informations are way below 0.10 , the value we used to declare independence of two counters in section [ mutual ] .therefore , even starting sampling from the same snapshot will produce a sufficiently different content of entropy pool . as we have three runs for every counter ,we compute the mutual information between all three pairs of these runs .then the minimum , the average ( arithmetic mean ) and the maximum of all obtained values are calculated .for comparison , we created three runs of 14 random , uniformly distributed and independent counters , and we computed their mutual information minimum , average and maximum .the comparison of real and random counters revealed that there are two types of counters .the first group contains counters with mutual information statistics similar to random counters these are counters , for and we call them the upper group / counters .the second group of counters , for exhibits statistics even better than the random counters , therefore we call them lower group / counters .numerically , groups are divided by threshold value .if mutual information between all pairs of runs of the counter lies above the threshold , then the counter belongs in the upper group , otherwise it is in the lower group .the results for the selected 14 counters , divided into upper and lower groups , are presented in figure [ fig5 ] .as marked on y axis .upper and lower group counters are presented separately . ]unusual behavior of the lower counters at the beginning is caused by counter ( memory , pool nonpaged allocs ) and , to a lesser extent , counter ( memory , available bytes ) .other two counter from the lower group show stable behavior on entire interval . figure [ fig6 ] shows counter statistics we can observe a sudden increase of mutual information roughly after 2.5 minutes after resumption .( memory , pool nonpaged allocs ) . ]another `` strange '' counter is counter .the mutual information is higher than threshold in the first 3 minutes ( but still less than average of random counters ) .after that , the values drop below threshold and stay there .( memory , available bytes ) . ]the lower group has distinctively lower mutual information statistics than completely random counters .this makes these counters slightly `` artificial '' and probably suspicious .we analyzed windows 7 performance counters as potential sources of randomness .generally , more counters and other entropy sources you include into your entropy pool the better chance of having sufficient entropy when you need it .our experiments yielded 19 promising counters .the final selection consists of 14 counters with enough entropy for practical purposes .their analysis allows us to draw the following conclusions : * if your applications uses .net platform , some of the platform s performance counters are good entropy sources . * using 11 counters ( without .net counter ) yields bits per 240ms a conservative estimate assuming ( time includes 20ms of sleeping and 10ms of collecting counters for each one of eight rounds ) .adding three .net counters increases the entropy to bits per 264ms ( , time includes 20ms of sleeping and 13ms of collecting counters for each one of eight rounds ) . 
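The mutual-information figures above can be reproduced with a plug-in estimator over the half-byte (4-bit) symbols in a window of 1400 samples. How the window is advanced is not stated in the extracted text, so the non-overlapping stepping below is an assumption; a sliding step of one sample would work the same way.

```python
import numpy as np

def mutual_information(x, y, alphabet=16):
    """Plug-in mutual information (bits) between two equal-length sequences
    of half-byte symbols (values 0..15)."""
    joint = np.zeros((alphabet, alphabet))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])).sum())

def windowed_mi(run_a, run_b, window=1400):
    """MI between two runs of the same counter over a floating window."""
    stop = min(len(run_a), len(run_b)) - window + 1
    return [mutual_information(run_a[i:i + window], run_b[i:i + window])
            for i in range(0, stop, window)]   # assumption: non-overlapping windows
```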
even without detailed analysis of unpredictability of these sourceswe can conclude that windows performance counters are viable option to feed randomness pool for prngs .* interestingly , most of top counters are mutually independent . a strong mutual dependence between counters is usually observed with obvious pairs ( e.g. available memory in bytes and kilobytes ) . *selected counters are robust entropy sources in virtual environment .independent runs of the virtual pc from the same snapshot showed mutual independence of each counter s samples ( with exception of short spike of counter ) .we did our analysis only in a virtual environment .we expect that the experiment in a host environment ( physical hardware ) would show comparable or even better results .certainly , the analysis can be extended further by more thorough experiments ( e.g. considering various time interval for sampling , exploring mutual dependence of higher orders , changing preprocessing of sampled counters ) , using better entropy estimators ( e.g. nsb estimator ) , studying the possibilities of influencing the predictability of performance counters etc .14 nist : recommendation for random number generation using deterministic random bit generators , nist special publication 800 - 90a , elaine barker and john kelsey , 2012 .nist : annex c : approved random number generators for fips pub 140 - 2 , security requirements for cryptographic modules , draft , randall j. easter and carolyn french , 2012 .nist : a statistical test suite for random and pseudorandom number generators for cryptographic applications , nist special publication 800 - 22 , revision 1a , andrew rukhin et al . , 2010 .the entropy device , section 4 special files , freebsd kernel interfaces manual ( available at http://www.freebsd.org/cgi/man.cgi?query=random&sektion=4 ) random.c a strong random number generator , linux kernel 3.2.10 source code ( available at http://lxr.linux.no/#linux+v3.2.10/drivers/char/random.c ) microsoft windows 7 cryptographic primitives library ( bcryptprimitives.dll ) security policy document , microsoft windows 7 operating system , fips 140 - 2 security policy document , version 2.2 , 2011 .( available at http://csrc.nist.gov/groups/stm/cmvp/documents/140-1/140sp/140sp1329.pdf ) microsoft windows server 2008 r2 kernel mode cryptographic primitives library ( cng.sys ) security policy document , microsoft windows server 2008 r2 operating system , fips 140 - 2 security policy document , version 2.3 , 2013 .( available at http://csrc.nist.gov/groups/stm/cmvp/documents/140-1/140sp/140sp1335.pdf ) p. lecuyer and r. simard : testu01 : a c library for empirical testing of random number generators .acm transactions on mathematical software , vol .33 , article 22 , 2007 .( available at http://www.iro.umontreal.ca/~simardr/testu01/tu01.html ) george marsaglia : diehard battery of tests of randomness , 1995 .( available at http://stat.fsu.edu/pub/diehard/ ) mike hamburg , paul kocher and mark e. marson : analysis of intel s ivy bridge digital random number generator .cryptography research , inc . , 2012 .( available at http://www.cryptography.com/public/pdf/intel_trng_report_20120312.pdf ) ilya nemenman , fariel shafee and william bialek : entropy and inference , revisited .advances in neural information processing systems 14 , mit press , 2002 .liam paninski : estimation of entropy and mutual information .neural computation 15 , pp . 
1191 - 1253 , mit press , 2003 . net applications : desktop operating system market share , market share reports , july 2013 . ( available at http://www.netmarketshare.com/ ) alfréd rényi : on measures of entropy and information . fourth berkeley symp . stat . and probability , university of california press , pp . 547 - 561 , 1961 . georg t. becker , francesco regazzoni , christof paar and wayne p. burleson : stealthy dopant - level hardware trojans , cryptographic hardware and embedded systems ches 2013 , lecture notes in computer science volume 8086 , springer , 2013 , pp . 197 - 214 . martin geisler , mikkel krøigård and andreas danielsen : about random bits , 2004 . ( available at http://www.daimi.au.dk/~mg/mamian/random-bits.pdf )
the security of many cryptographic constructions depends on random number generators for providing unpredictable keys , nonces , initialization vectors and other parameters . modern operating systems implement cryptographic pseudo - random number generators ( prngs ) to fulfill this need . performance counters and other system parameters are often used as a low - entropy source to initialize ( seed ) these generators . we perform an experiment to analyze all performance counters in a standard installation of the microsoft windows 7 operating system and assess their suitability as entropy sources . besides selecting the top 19 counters , we analyze their mutual information ( independence ) as well as their robustness in a virtual environment . the final selection contains 14 counters with sufficient overall entropy for practical applications . * keywords : * entropy , windows performance counters , randomness , prng .
low - density parity - check ( ldpc ) codes have been designed recently based on finite geometries and , more generally , balanced incomplete block designs ( bibds ) , see , for example , . one of the main advantages of these structured ldpc codes is that they can lend themselves to very low - complexity encoding , as opposed to random - like ldpc codes .moreover , experimental results show that the proposed codes perform well with iterative decoding . in the first part of this paper , we present novel high - rate structured ldpc codes with constant column weights 3 and higher , based on cyclic bibds ( cbibds ) , resolvable bibds ( rbibds ) , and cyclically resolvable cyclic bibds ( crcbibds ) .we obtain several infinite classes of -regular ldpc codes with values of varying from 3 to 8 ( theorems [ thm_cdf - ldpc codes ] , [ thm_rbibd - ldpc codes ] , and [ thm_crcbibd - ldpc codes ] ) as well as one infinite class of -regular ldpc codes for any prime power ( theorem [ thm_rbibd - ldpc codes_prime ] ) , all admitting flexible choices of and the code length .the presented results are more general than previous ones .our proposed ldpc codes have good structural properties , their tanner graphs are free of short cycles of length 4 and have girth 6 .their code rate is high , achieving or higher already at small block lengths .this is typically the rate of interest in high - speed applications , for example , in magnetic recording and optical communications channels .most classes of the constructed codes have a quasi - cyclic ( or similar ) structure , which allows linear - complexity encoding with simple feedback shift registers .in addition , many classes exhibit very sparse parity - check matrices .experimental results on decoding performance show that the novel ldpc codes perform very well with the sum - product algorithm , significantly better than known bibd - ldpc codes and random gallager ldpc codes for short to moderate block lengths ( cf .[ table_simu1 ] ) .we observe furthermore , very interestingly , for the proposed codes a performance gain as the column weights grow larger , and a particular good performance of ldpc codes based on crcbibds . in the second part of the paper, we apply our combinatorial construction techniques to systematic repeat - accumulate ( ra ) codes .ra codes were first introduced in as a serial concatenation of a rate- repetition code and a convolutional code with transfer function , called the accumulator ( fig . [ encoder ] ) . between these two constituent codes , an interleaver permutes the output of the repetition code and in some cases , the encoding scheme is also refined by a rate- combiner .this combiner performs modulo-2 addition on sets of bits .here , we consider systematic ra codes ( abbreviated from now on as _ sra codes _ ) , where the message of length is concatenated with the output of the accumulator of length . for a detailed description of the encoding process, we refer the reader to .the scheme leads to a parity - check matrix of the form ] such that in . 6 . for all values of ( mod ) with the possible exceptions , 7 . for all values of ( mod ) with the possible exceptions .the proof relies on known infinite ( ( 1)-(5 ) ) and finite ( ( 6)-(7 ) ) families of s ( cf . and the references therein ; ) .we note that cases have been presented in . 
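As a small, self-contained illustration of the construction behind the theorem above (the parameters are a toy example, not one of the code designs used in the paper), the sketch below develops a base block of a cyclic difference family modulo v into the point-by-block incidence matrix of a cyclic BIBD and checks the girth-6 property claimed earlier, namely that no two columns of the resulting parity-check matrix share two rows.

```python
import numpy as np
from itertools import combinations

def cyclic_bibd_incidence(v, base_blocks):
    """Develop the base blocks of a cyclic difference family mod v and return
    the point-by-block incidence matrix, used as an LDPC parity-check matrix."""
    blocks = [[(x + s) % v for x in B] for B in base_blocks for s in range(v)]
    H = np.zeros((v, len(blocks)), dtype=int)
    for j, B in enumerate(blocks):
        H[B, j] = 1
    return H

def four_cycle_free(H):
    """Girth >= 6 of the Tanner graph iff no two columns of H share two rows."""
    return all(int(H[:, i] @ H[:, j]) <= 1
               for i, j in combinations(range(H.shape[1]), 2))

# toy example: the (7,3,1) difference family with base block {0,1,3} (Fano plane)
H = cyclic_bibd_incidence(7, [[0, 1, 3]])
print(H.sum(axis=0), H.sum(axis=1))   # constant column weight 3 and row weight 3
print(four_cycle_free(H))             # True: no length-4 cycles
```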
a -regular quasi - cyclic ldpc code of length 546 and rate at least 0.81 can be obtained from a .[ thm_rbibd - ldpc codes ] let be a positive integer .then there exists a -regular ldpc code of length and rate at least based on an for the following cases : 1 . for any positive integer , 2 . for any positive integer , 3 . for any positive integer with the possible exceptions given in table [ table_rbibd ] , 4 . for any positive integer with the possible exceptions given in table [ table_rbibd ] . the proof is based on known infinite series of s ( cf . and the references therein ) .we remark that cases and of have been considered in . a -regular ldpc code of length 1702 and rate at least 0.89 can be constructed from an .[ thm_rbibd - ldpc codes_prime ] if and are both powers of the same prime , then a -regular ldpc code of length and rate at least based on an exists if and only if ( mod ) and ( mod ) .it has been shown in that , for and both powers of the same prime , the necessary conditions for the existence of an are sufficient .[ thm_crcbibd - ldpc codes ] let be a prime. then there exists a -regular ldpc code of length and rate at least based on a for the following cases : 1 . for any positive integer , 2 . for any odd positive integer , 3 . for any positive integer such that , and furthermore 4 . for any positive integer satisfying the following condition : is not a -th power in , or equivalently is not a -th power in , where is the largest power of dividing and a -th primitive root of unity in , 5 . for any positive integer satisfying the following condition : there exists an integer such that divides and are -th powers but not -th powers in , where is a -th primitive root of unity in , 6 . for the values of ( mod ) given in table [ table_rdf ] .moreover , there exists a -regular ldpc code of length and rate at least based on a for the following cases : 1 . for , or , and is a product of primes of the form ( mod ) as in the cases above , 2 . and is a product of primes of the form with odd . the proof is given in appendix [ app_thm_crcbibd - ldpc codes ] .we note that cases and have been treated in . a -regular ldpc code of length 2091 and rate at least 0.90 can be obtained from a . for experimental results on decoding performance , we employed the iterative sum - product algorithm , as proposed in , on an awgn channel with a maximum of 50 iterations per codeword .[ table_simu1 ] shows the bit - error rate ( ber ) performance of ldpc codes that have been constructed using the different combinatorial techniques presented in this section , and random regular gallager ldpc codes .a legend displays the following information in the respective order : code type , construction method in brackets , and a quadruple ] , where the -th column of has at rows and , , given that these rows exist .thus , the design parameters specify the vertical distance between the and in the columns of .the -part is unaffected by the accumulator and hence remains as introduced in section [ intro ] .consequently , most of the columns of have the same ( high ) column weight as the columns of , leading to an improved decoding performance compared to sra codes .the incidence matrix of a cbibd has a quasi - cyclic structure ] is the value at position .now we compute =x_1+\delta ( i-1 ) \pmod{pk}\ ] ] and apply the permutation to the rows of the entire incidence matrix . 
herewe consider only the partial matrix corresponding to ( fig .[ crcbibd_trans ] ) .for it holds that 1 .there exists exactly one column with 1-entries at row and ( mod ) for every , and 2 .all are distinct .the column of with 1-entries at rows and gives the column of . because of the cyclic resolvability there must be a resolution class with a column that contains 1-entries at rows ( mod ) and ( mod ) , . with , we have and hence we never use a row twice .the columns result in of and thus we have for every . moreover , must be unique , because there can only exist one column with 1-entries at rows ( mod ) and ( mod ) , , due to the axioms of the design .the columns must be distinct because the blocks of can never have two pairs of entries with difference due to constructional reasons . thus , the resulting must be distinct .finally , we can transform into a double diagonal matrix by simple column reordering and deleting the superfluous ones .we can even obtain wqra codes with an arbitrary design parameter with the restriction that .for this , we modify our computation to =x_1+\delta ( i-1 ) \pmod{pk},\ 1\leq i \leq pk.\ ] ] in this section , we compare the decoding performance of sra codes , wqra codes and regular ldpc codes for our new constructions ( fig .[ table_simu2 ] ) .these codes differ solely in the -part of their parity - check matrices and thus , the largest differences in performance are expected for short to moderate block lengths or for lower code rates , as in these cases the impact of is relatively higher than that of . for the decoding we use the sum - product algorithm on an awgn channel with a maximum of 50 iterations per codeword .the ldpc codes are again described by a quadruple ] , where is the column weight of .the first plot of fig . [ table_simu2 ] demonstrates the performance gain of a new low - rate w3ra code at relatively small block length , compared to the sra and ldpc code of the same parameters . in the second plot ,our w5ra - code shows a similarly well low - snr performance as the corresponding sra code and the excellent high - snr performance as the ldpc code .the sra code suffers a high error - floor at approximately 3.75 db , arising from many weight-2 columns in the parity - check matrix .this error - floor turns out stronger compared to the first plot , since we reduced the column weights of the -part from 5 ( instead of 3 ) to 2 in order to obtain a double diagonal matrix .consequently , the larger decrease of the column weights negatively affects the decoding performance in the error - floor region .the third plot demonstrates the performance gain of our novel w3ra codes compared to the sra codes of johnson and weller , for various codes of rate 0.85 and block lengths varying from 1220 to 3020 .the fourth plot shows a particular good high - snr performance of our w5ra - code compared to the corresponding sra and ldpc code .we have designed in this paper new classes of structured ldpc codes with high code rate and low - complexity encoding . 
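The concrete design parameters of the weight-q accumulator used above were lost in extraction, so the following is only a rough sketch of the underlying idea and not the paper's exact construction: it contrasts the weight-2 dual-diagonal accumulator part of a conventional SRA parity-check matrix with a banded variant whose columns carry additional ones at assumed vertical offsets ("given that these rows exist").

```python
import numpy as np

def dual_diagonal(m):
    """Accumulator part H2 of a conventional systematic RA code:
    ones on the main diagonal and on the first lower subdiagonal."""
    H2 = np.eye(m, dtype=int)
    H2[1:, :-1] += np.eye(m - 1, dtype=int)
    return H2

def banded_accumulator(m, offsets):
    """Illustrative weight-(len(offsets)+1) variant: column i gets extra ones
    at rows i + d1, i + d1 + d2, ... whenever those rows exist.
    The offsets here are assumptions, not the paper's design parameters."""
    H2 = np.zeros((m, m), dtype=int)
    for i in range(m):
        r = i
        H2[r, i] = 1
        for d in offsets:
            r += d
            if r >= m:
                break
            H2[r, i] = 1
    return H2

print(dual_diagonal(5))
print(banded_accumulator(8, [2, 3]))   # column weight 3 where the rows exist
```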
based on specific classes of bibds, we obtained several infinite classes of -regular ldpc codes with values of varying from 3 to 8 as well as an infinite class of -regular ldpc codes for any prime power , all admitting flexible choices of and the code length .we have furthermore addressed sra codes , proposing a generalized accumulator structure for higher column weights that replaces the conventional accumulator and leads to an encoding scheme , we have termed weight- ra code .this allowed an improved decoding performance closer to those of regular ldpc codes , along with the low encoding complexity of turbo - like codes .compared to sra codes , the relatively high error floors could be lowered significantly .the encoding scheme is applicable to our new construction techniques and may be used for further combinatorial constructions that lack an efficient encoding .the presented constructions arise from cyclic and resolvable bibds and allows to use an arbitrary number of block orbits or resolution classes for more flexibility in the code design .therefore , we can adjust the rate and length of the codes independently and thus produce a wider range of codes . the proposed novel ldpc and systematic ra codes in this paper offer good structural properties , perform very well with the sum - product algorithm , and are suitable for applications , e.g. , in high - speed applications in magnetic recording and optical communications channels .let be a prime .we first assume that is odd .then , by a result of genma , mishima and jimbo , a exists , whenever there is an .the following infinite ( ( i)-(iii ) ) and finite ( ( iv ) ) families of radical difference families are known ( cf . and the references therein ; ) : 1 .an exists for all primes ( mod ) .2 . let be a prime , let be the largest power of dividing and let be a -th primitive root of unity in . then an exists if and only if is not a -th power in , or equivalently is not a -th power in .3 . let be a prime and let be a -th primitive root of unity in .then an exists if and only if there exists an integer such that divides and are -th powers but not -th powers in .an exists for all primes displayed in table [ table_rdf ] .moreover , in a recursive construction is given that implies the existence of a whenever is a product of primes of the form ( mod ) .in addition , a has been shown to exist for any prime ( mod ) .we now consider the case when is even : in , a is constructed for any prime , where is an odd positive integer .furthermore , via the above recursive construction , a exists whenever is a product of primes of the form and is odd .the result now follows .let be the -th block of the difference family over with prime order . 
from the construction arises the following matrix \ ] ] where the -th column consists of the differences , and .note , that such an element must exist , since is a primitive element of .let denote the index of the column that contains entry 1 .the index indicates the position of the desired block containing the difference 1 .now , the following problems are equivalent : 1 .computation of the discrete logarithm 2 .computation of , such that .first , assume that the discrete logarithm is solved and thus is known .then , .conversely , assume that is known .let be the row index of the 1-entry in , such that .since there are only six possible rows , we can find in constant time .the solution of the discrete logarithm is then given by .the authors thank the anonymous referees for their careful reading and valuable insights that helped improving the presentation of the paper .y. kou , s. lin , and m. p. c. fossorier , `` low - density parity - check codes based on finite geometries : a rediscovery and new results , '' _ ieee trans .information theory _ ,47 , no . 7 , pp . 27112736 , 2001 .b. ammar , b. honary , y. kou , j. xu , and s. lin , `` construction of low - density parity - check codes based on balanced incomplete block designs , '' _ ieee trans .information theory _ ,50 , no . 6 , pp .12571268 , 2004 .alexander gruner is a ph.d .student in computer science at the wilhelm schickard institute for computer science , university of tbingen , germany , where he is part of an interdisciplinary research training group in computer science and mathematics .he received the diploma degree in computer science from the university of tbingen in 2011 .his research interests are in the field of coding and information theory with special emphasis on turbo - like codes , codes on graphs and iterative decoding .michael huber ( m.09 ) is a heisenberg research fellow of the german research foundation ( dfg ) at the wilhelm schickard institute for computer science , university of tbingen , germany , since 2008 . before that he had a one - year visiting full professorship at berlin technical university .he obtained the diploma , ph.d . and habilitation degrees in mathematics from the university of tbingen in 1999 , 2001 and 2006 , respectively .he was awarded the 2008 heinz maier leibnitz prize by the dfg and the german ministry of education and research ( bmbf ) .he became a fellow of the institute of combinatorics and its applications ( ica ) , winnipeg , canada , in 2009 .huber s research interests are in the areas of coding and information theory , cryptography and information security , combinatorics , theory of algorithms , and bioinformatics . among his publications in these areasare two books , _ flag - transitive steiner designs _ ( birkhuser verlag , frontiers in mathematics , 2009 ) and _ combinatorial designs for authentication and secrecy codes _( now publishers , foundations and trends in communications and information theory , 2010 ) .he is a co - investigator of an interdisciplinary research training group in computer science and mathematics at the university of tbingen .
this paper presents several new construction techniques for low - density parity - check ( ldpc ) and systematic repeat - accumulate ( ra ) codes . based on specific classes of combinatorial designs , the improved code design focuses on high - rate structured codes with constant column weights 3 and higher . the proposed codes are efficiently encodable and exhibit good structural properties . experimental results on decoding performance with the sum - product algorithm show that the novel codes offer substantial practical potential , for instance in high - speed applications such as magnetic recording and optical communications channels . * keywords : * low - density parity - check ( ldpc ) code , systematic repeat - accumulate ( ra ) code , accumulator design , sum - product algorithm , combinatorial design .
our database of the electronic open market sibe ( sistema de interconexin burstil electrnico ) allows us to follow each transaction performed by all the firms registered at bme . in 2004the bme was the eight in the world in market capitalization .we consider the stocks banco bilbao vizcaya argentaria ( bbva ) , banco santander central hispano ( san ) , and telefnica ( tef ) .we also consider only the most active firms defined by the criterion that each firm made at least trades / year and was active at least days per year .the number of firms is ( bbva ) , ( san ) , and ( tef ) .these firms are involved in of the transactions .the investigated period is 2001 - 2004 .we do not consider other stocks because we have verified that the number of detected patches is too small to perform careful statistical estimation .the series under study is the series of signed traded value . for each firm and for each stockwe construct the series composed by all the trades performed by the firm with a value for a buy trade and for a sell trade , where is the value ( in euros ) of the traded shares .the method we use to detect statistically the presence of patches is adapted from ref . where it was introduced to study patchiness non - stationarity of human heart rate .the algorithm works as follows .one moves a sliding pointer along the signal and computes the mean of the subset of the signal to the left and to the right of the pointer . from these mean valuesone computes a statistics and finds the position of the pointer for which the statistics is maximal .the significance level of this value of is defined as the probability of obtaining it or a smaller value in a random sequence .one then chooses a threshold ( in our case ) and the sequence is cut if the significance level is smaller than the threshold .the cut position is the boundary between two consecutive patches .the procedure continues recursively on the left and right subset created by each cut . before a new cutis accepted one also computes between the right - hand new segment and its right neighbor and between the left - hand new segment and its left neighbor and one checks if both values of are statistically significant according to the selected threshold .the process stops when it is not possible to make new cut with the selected significance . in the present study ,we are mainly interested in directional patches , i.e. patches where the trader consistently buys or sells a large amount of shares . in other wordswe wish to exclude patches in which the inventory of the firm is diffusing randomly , without a drift . to this end for each patchwe compute the total value purchased , the total value sold and the total value .we then consider a patch as directional when either ( buy patch ) or ( sell patch ) .the parameter can be varied and in the present study we set it to .we obtain similar results for different values of such as and . finally in the present paper we consider patches with at least trades .* acknowledgments * authors acknowledge sociedad de bolsas for providing the data and the integrated action italy - spain mesoscopics of a stock market " for financial support .gv , fl , and rnm acknowledge support from miur research project `` dinamica di altissima frequenza nei mercati finanziari '' and nest - dysonet 12911 eu project .em acknowledges partial support from mec ( spain ) throught grants fis2004 - 01001 , mosaico and a ramny cajal contract and comunidad de madrid through grants uc3m - fi-05 - 077 and simumat - cm
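A minimal sketch of the segmentation just described is given below. It should be read as an approximation: the paper evaluates the significance of the maximal t over all cut positions (with an empirical formula for random sequences) and re-checks each new segment against its neighbors before accepting a cut, whereas this sketch uses a plain two-sided p-value and omits the neighbor check. The numerical values of the significance threshold, the minimal patch length and the directionality parameter gamma were stripped from the text, so the defaults here are assumptions.

```python
import numpy as np
from scipy import stats

def best_split(x, min_len):
    """Cut position maximizing the t-statistic between left and right means."""
    best_t, best_i = 0.0, None
    for i in range(min_len, len(x) - min_len):
        t = abs(stats.ttest_ind(x[:i], x[i:], equal_var=True).statistic)
        if t > best_t:
            best_t, best_i = t, i
    return best_i, best_t

def segment(x, p_threshold=0.01, min_len=50):
    """Recursive segmentation of a signed-traded-value series into patches."""
    x = np.asarray(x, dtype=float)
    if len(x) < 2 * min_len:
        return [x]
    i, t = best_split(x, min_len)
    p = 2 * stats.t.sf(t, df=len(x) - 2)     # stand-in for the paper's significance level
    if i is None or p > p_threshold:
        return [x]
    return segment(x[:i], p_threshold, min_len) + segment(x[i:], p_threshold, min_len)

def is_directional(patch, gamma=0.75):
    """Buy or sell patch: purchased (or sold) value at least gamma times total value."""
    buys = patch[patch > 0].sum()
    sells = -patch[patch < 0].sum()
    total = buys + sells
    return total > 0 and max(buys, sells) >= gamma * total
```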
the dynamics of many socioeconomic systems is determined by the decision making process of agents . the decision process depends on agent s characteristics , such as preferences , risk aversion , behavioral biases , etc . . in addition , in some systems the size of agents can be highly heterogeneous leading to very different impacts of agents on the system dynamics . the large size of some agents poses challenging problems to agents who want to control their impact , either by forcing the system in a given direction or by hiding their intentionality . here we consider the financial market as a model system , and we study empirically how agents strategically adjust the properties of large orders in order to meet their preference and minimize their impact . we quantify this strategic behavior by detecting scaling relations of allometric nature between the variables characterizing the trading activity of different institutions . we observe power law distributions in the investment time horizon , in the number of transactions needed to execute a large order and in the traded value exchanged by large institutions and we show that heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system . in many complex systems agents self organize themselves in an ecology of different species " interacting in a variety of ways . agents are not only different in their strategies , information , and preferences , but they can be very different in their size . examples include individual s wealth and firms size . the presence of agents with large size poses several challenging questions . it is likely that large agents impacts the system in a way that is significantly different from small ones . indeed , small agents can easily hide their intentionality , while for large agents this is not so easy and they must adopt strategies taking into account their own effect because revealing their intention could decrease their fitness . financial markets are an ideal system to investigate this problem . there is empirical evidence that market participants are very heterogeneous in size . for example banks and mutual funds size follow zipf s law , i.e. the probability that the size of a participant is larger than decays as . as a consequence large investors usually need to trade large quantities that can significantly affect prices . the associated cost is called market impact . for this reason large investors refrain from revealing their demand or supply and they typically trade their large orders incrementally over an extended period of time . these large orders are called _ packages _ or _ hidden orders _ and are split in smaller trades as the result of a complex optimization procedure which takes into account the investor s preference , risk aversion , investment horizon , etc .. here we investigate the trading activity of a large fraction of the financial firms exchanging a financial asset at the spanish stock market ( bolsas y mercados espaoles , bme ) in the period 2001 - 2004 ( see materials and methods section for a description of data ) . firms are credit entities and investment firms which are members of the stock exchange and are entitled to trade in the market . our approach aims to be a comprehensive approach analysing the overall dynamics of all packages exchanged in the market . however , our database does not contain direct information on packages , so that this information must be statistically inferred from the available data . 
since we do not have information on clients but only on firms , we develop a detection algorithm ( see material and methods for a description of the algorithm ) which is not sensible to small fluctuation in the buy / sell activity of a firm . the algorithm detects time segments in the inventory time evolution of a firm when the firm acts as a net buyer or seller at an approximately constant rate . we call these segments _ patches _ and we assume that in each of these patches it is contained at least one package . since firms act simultaneously as brokers for many clients , it is rather frequent that in a patch not all the transactions have the same sign . however , a vast majority of firm inventory time series can be partitioned in patches with a well defined direction to buy or to sell . this is probably due to the fact that in most cases the trading activity of a firm is dominated by the activity of one big client . we consider _ directional patches _ , i.e. patches with a well defined direction ( see figure [ series ] ) . the characterizing variables of a directional patch are the time length ( in seconds ) of the patch , measured as the time interval between the first and the last order of the patch , the traded value and the number of trades characterizing the patch . for example , is the number of buy trades and is the purchased value for buy patches . we investigate first the distributional properties of the patches identified by our algorithm . figure [ distrib ] shows the distribution of , , and for the three investigated stocks . the asymptotic behavior of all the three distributions can be approximated by a power law function , where can be , , or and is the exponent characterizing the power law behavior . a summary of the estimated exponents is shown in table [ summary ] from which one can conclude that , , and . our analysis makes explicit the presence of very broad distribution for the three variables characterizing a patch . in fact the very low value of the exponents is consistent with the conclusion that and belong to the domain of lvy stable distributions . this result indicates that in the market there is a huge heterogeneity in the scales characterizing the trading profile of the investors . the volume of the packages is likely to be related to the size of the investors . large investors need to trade large packages to rebalance their portfolio . gabaix _ et al . _ developed a theory which predicts that package size should be power law distributed with an exponent . the value we find for is slightly larger than the one predicted by them . on the contrary , the value derived by the theory in is significantly larger than our estimate ( ) . finally , the power law distribution of packages time length might reflect the heterogeneity of time scales among investors . the distribution of is compatible with the ones obtained by using specialized database describing the investment packages of large investors ( see figure [ distrib ] ) . gabaix _ et al . _ theory predicts the value which is significantly larger than our value ( ) . the presence of power law distribution of investors time scales has been recently suggested in stylized models of investment decisions . .summary of the properties of detected patches . the number in parenthesis nearby the tick symbol is the number of patches detected for the considered stock . rows 1 - 3 : tail exponents of the distribution of , , and estimated with the hill estimator ( or maximum likelihood estimator ) . 
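The tail exponents reported in rows 1-3 of the table come from a Hill (maximum likelihood) estimator; a minimal version is sketched below. The number k of upper order statistics used in the fit is not given in the extracted text, so its choice here is left to the user, and the quoted confidence intervals follow from the asymptotic standard error alpha / sqrt(k).

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail exponent alpha from the k largest positive values."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]
    gamma = np.mean(np.log(x[:k]) - np.log(x[k]))   # estimate of 1/alpha
    alpha = 1.0 / gamma
    stderr = alpha / np.sqrt(k)                     # asymptotic standard error
    return alpha, stderr
```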
in parenthesis we report the confidence interval . rows 4 - 6 : exponents of the allometric relations defined in eq . [ scaling ] . the exponents are estimated with pca and the errors are estimated with bootstrap . in parenthesis we report the confidence interval . rows 7 - 9 : percentage of firms with at least patches for which one can not reject the hypothesis of lognormality with confidence according to jarque - bera test . the numbers in parenthesis are the number of firms for which one can not reject the hypothesis of lognormality divided to the number of firms used in the test . [ cols=">,^,^,^",options="header " , ] [ summary ] the role of size heterogeneity in the emergence of power law distributions will be considered at the end of the paper . to complete our characterization of firm patches , we now consider the relation between the variables characterizing each patch . specifically , by applying the principal component analysis ( pca ) to the set of points with coordinates , we investigate the allometric relations between any two of the above variables , i.e. figure [ scatter ] shows the scatter plots and the contour plots for the stock telefnica . in all three cases a clear dependence between the variables is seen . pca analysis shows that the first eigenvalue explains on average , , and of the variance for the first , second , and third allometric relation , respectively , indicating a strong correlation between the variables . the estimated exponents ( see table [ summary ] ) are consistent for different stocks so that the allometric relations are the presence of scaling relations between the variables were first suggested in ref . but it is worth noting that the theory developed in that paper predicts and , and these values are quite different from the ones we estimate from data . the first allometric relation indicates that the number of transactions in which a package is split is approximately proportional to the total traded value of the package . this implies that the mean transaction volume is roughly independent on the size of the package . this mean value is on average determined by the size of the available volume at the best quote indicating that the trader does not trade orders larger than the volume available at the best quote , probably to avoid being too aggressive . we consider the relation between the three variables together by performing a pca on the set of points describing the patches and identified by the coordinates . the set of points effectively lies on a two dimensional manifold which has one dimension much larger than the other . the fact that the first eigenvalue is large indicates that one factor dominates the trading strategy . the allometric relations of the three variables associated with the first eigenvalue of the pca provides an estimation of the exponents ( , , and for telefnica ) which , differently than in the bivariate case , are of course coherent among them and only slightly different from the ones obtained from the bivariate analysis . we now go back to the problem of assessing the role of firm heterogeneity . the first scientific question is : is the fat tailed distribution of , , and due to the fact that individual firms place heterogeneously sized packages or is this an effect of the aggregation of many different firms together ? to answer this question we test the hypothesis that the patches identified for a given firm trading a given stock are lognormally distributed . 
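The allometric exponents in rows 4-6 are obtained by PCA rather than ordinary regression, i.e. as the slope of the principal axis of the point cloud. The sketch below assumes the variables are log-transformed before the PCA (the natural choice for power-law relations between N, V and T) and estimates the error by bootstrap, as stated in the table caption; the log transform is an assumption where the extracted text is silent.

```python
import numpy as np

def allometric_exponent(x, y):
    """Slope of the first principal axis of (log x, log y): the exponent beta
    in y ~ x**beta, with the fraction of variance explained by that axis."""
    data = np.column_stack([np.log(x), np.log(y)])
    data -= data.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data, rowvar=False))
    v = eigvecs[:, np.argmax(eigvals)]
    return v[1] / v[0], eigvals.max() / eigvals.sum()

def bootstrap_error(x, y, n_boot=1000, seed=0):
    """Bootstrap standard error of the PCA slope."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    betas = []
    for _ in range(n_boot):
        s = rng.choice(len(x), size=len(x))
        betas.append(allometric_exponent(x[s], y[s])[0])
    return float(np.std(betas))
```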
the test ( see table [ summary ] ) shows that for most of the trading firms we can not reject the hypothesis that the patches have characteristic sizes distributed as a lognormal . since we reject the lognormal hypothesis for the pool obtained by considering all the firms , we conclude that the power law distribution of , and is due to a heterogeneity in patch scale _ between _ different firms rather than _ within _ each firm . the second scientific question concerns the role of firm heterogeneity for scaling laws . to assess this role , for each firm we compute the exponents , , and of the bivariate relations of eq . [ scaling ] ( see insets of fig . [ scatter ] ) . we observe that the exponents obtained for each firm are distributed around the corresponding value of the exponent obtained for the pool . this result indicates that the bivariate allometric relations are not an effect of the aggregation but are observed , on average , also for individual firms . in conclusion , our comprehensive investigation of packages traded at bme shows that the heterogeneity of firms plays an essential role in the emergence of power law tails in the investment time horizon , in the number of transactions and in the traded value exchanged by packages . by contrast , the scaling laws between the variables characterizing each package are essentially the same across different firms , with the possible exception of the relation between and , perhaps reflecting different degrees of aggressiveness of firms .
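The per-firm lognormality check used above (Jarque-Bera, as stated in the table caption) can be sketched as follows; the minimal number of patches required per firm was stripped from the text, so the threshold below is an assumption.

```python
import numpy as np
from scipy import stats

def lognormal_not_rejected(values, alpha=0.05):
    """Jarque-Bera test on log(values): True when normality of the logs
    (i.e. lognormality of the patch sizes) cannot be rejected at level alpha."""
    logv = np.log(np.asarray(values, dtype=float))
    _, p = stats.jarque_bera(logv)
    return p >= alpha

def fraction_lognormal(per_firm_values, min_patches=30):
    """Fraction of firms (with enough patches) compatible with lognormality."""
    eligible = [v for v in per_firm_values if len(v) >= min_patches]
    ok = sum(lognormal_not_rejected(v) for v in eligible)
    return ok / len(eligible) if eligible else float("nan")
```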
[ [ differential - privacy . ] ] differential privacy .+ + + + + + + + + + + + + + + + + + + + + social and communication networks have been the subject of intense study over the last few years .however , while these networks comprise a rich source of information for science , they also contain highly sensitive private information .what kinds of information can we release about these networks while preserving the privacy of their users ?simple measures , such as removing obvious identifiers , do not work ; for example , several studies ( e.g. , ) reidentified individuals in the graph of a social network even after all vertex and edge attributes were removed .such attacks highlight the need for statistical and learning algorithms that provide rigorous privacy guarantees .differential privacy , which emerged from a line of work started by , provides meaningful guarantees in the presence of arbitrary side information . in a traditional statistical data set , where each person corresponds to a single record ( or row of a table ), differential privacy guarantees that adding or removing any particular person s data will not noticeably change the distribution on the analysis outcome .there is now a rich and deep literature on differentially private methodology for learning and other algorithmic tasks ; see for a recent tutorial .by contrast , differential privacy in the context of graph data is much less developed .there are two main variants of graph differential privacy : _ edge _ and _ node _ differential privacy .intuitively , edge differential privacy ensures that an algorithm s output does not reveal the inclusion or removal of a particular edge in the graph , while node differential privacy hides the inclusion or removal of a node together with all its adjacent edges .edge privacy is a weaker notion ( hence easier to achieve ) and has been studied more extensively , with particular emphasis on the release of individual graph statistics , the degree distribution , and data structures for estimating the edge density of all cuts in a graph .several authors designed edge - differentially private algorithms for fitting generative graph models , but these do not appear to generalize to node privacy with meaningful accuracy guarantees . the stronger notion , node privacy , corresponds more closely to what was achieved in the case of traditional data sets , and to what one would want to protect an individual s data : it ensures that _ no matter what an analyst observing the released information knows ahead of time _ , she learns the same things about an individual alice regardless of whether alice s data are used or not .in particular , no assumptions are needed on the way the individuals data are generated ( they need not even be independent ) .node privacy was studied more recently , with a focus on on the release of descriptive statistics ( such as the number of triangles in a graph ) .unfortunately , differential privacy s stringency makes the design of accurate , node - private algorithms challenging . in this work, we provide the first algorithms for node - private inference of a high - dimensional statistical model that does not admit simple sufficient statistics . [[ modeling - large - graphs - via - graphons . 
] ] modeling large graphs via graphons .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + traditionally , large graphs have been modeled using various parametric models , one of the most popular being the stochastic block model .here one postulates that an observed graph was generated by first assigning vertices at random to one of groups , and then connecting two vertices with a probability that depends on the groups the two vertices are members of .as the number of vertices of the graph in question grows , we do not expect the graph to be well described by a stochastic block model with a fixed number of blocks . in this paperwe consider nonparametric models ( where the number of parameters need not be fixed or even finite ) given in terms of a _graphon_. a graphon is a measurable , bounded function ^ 2\to [ 0,\infty) ] to the vertices , and then connecting vertices with labels with probability , where is a parameter determining the density of the generated graph with .we call a -random graph with target density ( or simply a -random graph ) . to our knowledge ,random graph models of the above form were first introduced under the name latent position graphs , and are special cases of a more general model of `` inhomogeneous random graphs '' defined in , which is the first place were -dependent target densities were considered . for both dense graphs ( whose target density does not depend on the number of vertices ) and sparse graphs ( those for which as ) , this model is related to the theory of convergent graph sequences . for dense graphs it was first explicitly proposed in , though it can be implicitly traced back to , where models of this form appear as extremal points of two - dimensional exchangeable arrays ; see ( roughly , their results relate graphons to exchangeable arrays the way de finetti s theorem relates i.i.d .distributions to exchangeable sequences ) . for sparse graphs, offers a different nonparametric approach .[ [ estimation - and - identifiability . ] ] estimation and identifiability .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + assuming that is generated in this way , we are then faced with the task of estimating from a _ single observation _ of a graph . to our knowledge , this task was first explicitly considered in , which considered graphons describing stochastic block models with a fixed number of blocks .this was generalized to models with a growing number of blocks , while the first estimation of the nonparametric model was proposed in .various other estimation methods were proposed recently , for example .these works make various assumptions on the function , the most common one being that after a measure - preserving transformation , the integral of over one variable is a strictly monotone function of the other , corresponding to an asymptotically strictly monotone degree distribution of .( this assumption is quite restrictive : in particular , such results do not apply to graphons that represent block models . ) for our purposes , the most relevant works are and , which provide consistent estimators without monotonicity assumptions ( see `` comparison to nonprivate bounds '' , below ) .one issue that makes estimation of graphons challenging is _ identifiability _ : multiple graphons can lead to the same distribution on .specifically , two graphons and lead to the same distribution on -random graphs if and only if there are measure preserving maps \to[0,1] ] , though this upper bound is loose in many cases . 
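The sampling procedure just described is easy to make concrete. The sketch below draws a W-random graph with target density parameter rho by assigning i.i.d. uniform labels and connecting vertices independently with probability min(1, rho * W(x_i, x_j)); the 2-block graphon used as an example is illustrative only and plays no special role in the paper.

```python
import numpy as np

def sample_w_random_graph(w, n, rho=1.0, rng=None):
    """Sample an n-vertex W-random graph with edge probability min(1, rho*w(x_i,x_j))."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(size=n)                               # i.i.d. uniform labels
    probs = np.minimum(1.0, rho * w(x[:, None], x[None, :]))
    coin = rng.uniform(size=(n, n)) < probs
    a = np.triu(coin, k=1)                                # keep each pair once
    return (a | a.T).astype(int)

def w_block(x, y, p_in=0.8, p_out=0.2):
    """Example graphon: a 2-block stochastic block model on [0,1/2) and [1/2,1]."""
    same = (x < 0.5) == (y < 0.5)
    return np.where(same, p_in, p_out)

g = sample_w_random_graph(w_block, 200, rho=0.5)
```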
as a specific instantiation of these bounds ,let us consider the case that is exactly described by a -block model , in which case and {k / n}) ] , showing that we do not lose anything due to privacy in this special case .another special case is when is -hlder continuous , in which case and ; see remark [ rem : hoelder - cont - w ] below .[ [ comparison - to - previous - nonprivate - bounds . ] ] comparison to previous nonprivate bounds .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we provide the first consistency bounds for estimation of a nonparametric graph model subject to node differential privacy . along the way , for sparse graphs, we provide more general consistency results than were previously known , regardless of privacy .in particular , to the best of our knowledge , _ no prior results give a consistent estimator for that works for sparse graphs without any additional assumptions besides boundedness .when compared to results for nonprivate algorithms applied to graphons obeying additional assumptions , our bounds are often incomparable , and in other cases match the existing bounds .we start by considering graphons which are themselves step functions with a known number of steps . in the dense case ,the nonprivate algorithms of and , as well as our nonprivate algorithm , give an asymptotic error that is dominated by the term {k / n}) ] where ), they analyze an inefficient algorithm ( the mle ) .the bounds of are incomparable to ours , though for the case of -block graphons , both their bounds and our nonprivate bound are dominated by the term {k / n} ] , we use and to denote the edge set and the adjacency matrix of , respectively . the edge density is defined as the number of edges divided by .finally the degree of a vertex in is the number of edges containing .we use the same notation for a weighted graph with nonnegative edge weights , where now , and .we use to denote the set of weighted graphs on vertices with weights in ] such that for all ] into adjacent intervals of lengths .define }} ] for }} ] by .associated with the -norm is a scalar product , defined as for two matrices and , and for two square integrable functions ^ 2\to{\mathbb{r}} ] and }},w) ] and then setting . if , then has entries in ] into intervals of possibly different lengths .[ [ approximation - by - block - models . ] ] approximation by block models .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in the opposite direction , we can map a function to a matrix by the following procedure .starting from an arbitrary partition of ] into adjacent intervals of lengths .it then follows from the lebesgue density theorem ( see , e.g. , for details ) that as .we will take the above approximation as a benchmark for our approach , and consider it the error an `` oracle '' could obtain ( hence the superscript ) .[ [ convergence . 
] ] convergence .+ + + + + + + + + + + + [ sec : convergence ] the theory of graph convergence was first developed , where it was formulated for dense graphs , and then generalized to sparse graphs in .one of the notions of graph convergence considered in these papers is the notion of convergence in metric .the metric in question is similar to the metric , but instead of the -norm , one starts from the cut - norm first defined in , } \bigl| \int_{s\times t}w \bigr|,\ ] ] where the supremum goes over all measurable sets ] is then defined as where the inf goes over all measure preserving bijections on ] and }}-w\|_\square , ] with node - weights one and edge - weights , and given a partition of into sets } \}\ , .\ ] ] we also consider the set of _ fractional -way cuts _, , defined in terms of _ fractional -partitions _a fractional -partition of is a map , where is the simplex ^q\colon \sum_i\rho_i=1\} ] , with a fractional partition now a measurable function \to \delta_q ] into classes , i.e. , over all maps \to [ k] ] . for a fixed input graph , maximizingthe score is the same as minimizing the distance , i.e. the sensitivity of the new score is then bounded by times the maximum degree in ( since only affects the score via the inner product ) .but this is still problematic since , a priori , we have no control over either the size of or the maximal degree of .to keep the sensitivity low , we make two modifications : first , we only optimize over matrices whose entries are of order ( in the end , we expect that a good estimator will have entries which are not much larger than , which is of order ) , and second we restrict ourselves to graphs whose maximum degree is not much larger than one would expect for graphs generated from a bounded graphon , namely a constant times the average degree . while the first restriction is something we can just implement in our algorithm , unfortunately the second is something we have no control over : we need to choose small enough to guarantee privacy for all input graphs , and we have set out to guarantee privacy in the worst case , which includes graphs with maximal degree . here, we employ an idea from : we first consider the restriction of to where will be chosen to be of the order of the average degree of , and then extend it back to all graphs while keeping the sensitivity low . after these motivations , we are now ready to define our algorithm .it takes as input the privacy parameter , the graph , a number of blocks , and a constant that will have to be chosen large enough to guarantee consistency of the algorithm .it outputs a matrix from the set of matrices ^{k\times k } : \text{all entries } b_{i , j}\text { are multiples of } \frac 1 n\}.\ ] ] inside our algorithm , we use an -private algorithm to get an estimate for the edge density of .we do so by setting , where is a laplace random variable with density .the existence of the lipschitz extension used in the algorithm follows from lemma [ lem : lip - extension ] .[ step : rho - approx]compute an -node - private density approximation ( the target maximum degree ) ( the target norm for the matrix ) for each and , let denote a nondecreasing lipschitz extension of from to such that for all matrices , , and define , sampled from the distribution where ranges over matrices in and [ lem : privacy - main ] algorithm [ alg : main - algo ] is -node private . 
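The two private ingredients of the algorithm just defined can be sketched as follows. This is a toy illustration, not the paper's implementation: the Laplace scale uses the fact that adding or removing one node changes the edge density of an n-vertex graph by at most about 2/n (the paper's exact constant may differ), and the exponential mechanism is shown over a small explicit candidate set of block matrices and labellings, whereas the algorithm above optimizes over all equipartitions and bounded-entry matrices and relies on Lipschitz extensions to cap the score sensitivity.

```python
import numpy as np

def private_edge_density(adj, eps, rng=None):
    """eps-node-private edge density via the Laplace mechanism (toy sensitivity 2/n)."""
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    rho = adj.sum() / (n * (n - 1))          # = (#edges) / (n choose 2)
    return rho + rng.laplace(scale=2.0 / (n * eps))

def score(adj, b, labels):
    """Least-squares-type score <A, B_pi> for a k-block matrix b and labelling pi."""
    return float((adj * b[np.ix_(labels, labels)]).sum())

def exponential_mechanism(adj, candidates, eps, sensitivity, rng=None):
    """Pick (b, labels) with probability proportional to exp(eps*score/(2*sensitivity))."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([score(adj, b, lab) for b, lab in candidates])
    logits = eps * scores / (2.0 * sensitivity)
    logits -= logits.max()                   # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return candidates[rng.choice(len(candidates), p=p)]
```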
by lemma [ lem : dmns ] from appendix [ app : private ] , the estimate is -private , so we want to prove that the exponential mechanism itself is -private as well . in view of lemma[ lem : exp - mech ] from appendix [ app : private ] , all we need to show is that the the vertex sensitivity of is at most . to this end , we first bound the vertex sensitivity of the original score when restricted to graphs with degree .let be node neighbors . from, we see that } ( a_{xy}-a'_{xy})b_{\pi(x)\pi(y)}\ , , \ ] ] where are the adjacency matrices of and . since and differ in at most entries , the score differs by at most .this is at most , since .since is a lipschitz extension of , the vertex sensitivity of ( over _ all _ neighboring graphs ) is at most , as required .[ thm : final ] let ^ 2\to [ 0,\lambda] ] be a normalized graphon , let , let , , and be an integer . if is the least - squares estimator ( algorithm [ alg : main - algo ] ) , , , then }}\bigr ) \leq { \eps_k^{(o)}}(w)+ 2\eps_n(w ) + o_p{{\left ( { \sqrt[4]{\lambda^2{{\left ( { \frac{\log k}{\rho n } + \frac { k^2}{\rho n^2 } } \right ) } } } } \right)}}.\ ] ] in particular , }}\bigr)\to 0 ] and such that whenever , then we have that and .see appendix [ sec : holder ] for details .theorems [ thm : final ] and [ thm : final ] imply that the sets of fractional -way cuts of the estimator from these theorems provide good approximations to the -way cuts of the graph ( as defined in section [ sec : w - rand ] ) . specifically : [ thm : cuts ] let be an integer .\(i ) under the assumptions of theorem [ thm : final ] , {\lambda^2{{\left ( { { \frac{\log k}{\rho n } + \frac { k^2}{\rho n^2 } } } \right ) } } } } \right)}}.\ ] ] \(ii ) under the assumptions of theorem [ thm : final ] , {\frac{\lambda^2\log k}{\rho n } } + \lambda\sqrt{\frac{k^2\log n}{n\eps}}+\frac { { \lambda}}{n\rho\eps } } \right)}}.\ ] ] the proof of the theorem relies on the theory of graph convergence , in particular the results of , and is given in appendix [ app : cuts ] . when considering the `` best '' block model approximation to , one might want to consider block models with unequal block sizes ; in a similar way , one might want to construct a private algorithm that outputs a block model with unequal size blocks , and produces a bound in terms of this best block model approximation instead of . with more cumbersome notation ,this can be easily proved with our methods , with the minimal block size taking the role of in all our proofs .we leave the details to a journal version .at a high level , our proofs of theorems [ thm : final ] and of [ thm : final ] follow from the fact that for all and , the expected score ] , thus relating the -error of our estimator to the `` oracle error '' defined in .in this section we present the analysis of exact and approximate least squares .this allows us to analyze the nonprivate algorithm .the analysis of the private algorithm ( theorem [ thm : final ] ) requires additional arguments relating the private approximate maximizer to the nonprivate one ; we present these in section [ sec : private - analysis ] ) .our main concentration statement is contained in the following proposition , which we prove in section [ sec : concentration ] below . 
to state it, we define , for every symmetric matrix with vanishing diagonal , to be the distribution over symmetric matrices with zero diagonal such that the entries are independent bernouilli random variables with .[ cor : concentration ] let , ^{n\times n} ] and expressing the result in terms of instead of then introduces the error term in , and the bounds in theorem [ thm : cuts ] .the following two lemmas contain the core of the argument outlined at the beginning of this section .[ lem : expectations ] let ^{n\times n} ] .by linearity of expectation , we have taking into account the definition of , the lemma follows .our second lemma states that the realized scores are close to their expected values .the proof is based on a careful application of the concentration bounds .the argument is delicate because we must take advantage of the low density ( when is small ) .[ lem : concentration ] let , let ^{n\times n} ] .first , consider a specific pair . recall that } }= 2{{\left \langle { a-{q } } , { b_\pi}\right\rangle}}\,.\ ] ] we wish to bound the deviation of from its mean .set .the quantity is a sum of independent random variables in ] will be chosen in a moment .setting and the assumption implies , and setting , the bound from lemma [ lem : chernoff - mult ] becomes implying that finally , we observe that for any , the maximum of over all ^{k\times k} ] which we can absorb into the error already present . to prove the almost sure statement, we use that almost surely , which by lemma [ lem : good - trho ] ( part 2 ) from appendix [ app : aux ] implies that almost surely . since the error probability in is exponentially small, we can use the borel - cantelli lemma to obtain the a.s .statement . to deduce theorem [ thm : final ] from theorem [ thm : h ], we will bound in terms of , and in terms of .we will show that the leading error in both cases is an additive error of . to do this ,we need two lemmas .[ lem : equi - part ] fix and . 1 . for each equipartition \to [ k] ], is an equipartition .2 . for all equipartitions \to [ k] ] such that .any equipartition must have exactly classes of size and classes of size , where are determined by the equations , ; and any partition with these properties is an equipartition .the statement follows . to state the next lemma, we define the _ standard equipartition _ of ] , where .note that , , and .[ lem : equi - part - bd ] let be a symmetric matrix with nonnegative entries , and let be the standard equipartition of ] unless lies in one of the sets , or , ] ( note that the set is empty , so that here we only have to consider ) . in a similar way ,the difference in is bounded by , and the difference in is bounded by .the total contribution of all these sets can then be bounded by we start by bounding .let \to [ k] ] into adjacent intervals of lengths . by the triangle inequality, the fact that the set of measure preserving bijections \to [ 0,1] ] . all together , this proves that { \lambda^2{{\left ( { \frac{k^2 } { n^2\rho } + \frac{\log k } { n\rho } } \right ) } } } } \right)}}.\ ] ] the corresponding a.s . bound follows again from the fact that a.s . 
, and the fact that all other failure probabilities are exponentially small .next fix such that it is a minimizer in .that implies that is obtained from by averaging over a partition of into classes , which in particular implies that .together with lemma [ lem : equi - part - bd ] this implies that there is an equipartition \to [ k] ] , we then bound ^\sigma\|_2 \\ & \leq \|{{w[{b_\pi } ] } } - w\|_2+\hat\delta_2({{h_n}(w)},w ) \\ & \leq { \eps_k^{(o)}}(w)+\eps_n(w)+\sqrt{\frac { 4k}\lambda n } , \end{aligned}\ ] ] where in the first line we use ^\sigma ] .together with this completes the proof of the theorem .in this section we prove consistency of the private algorithms .our analysis relies on some basic results on differentially private algorithms from previous work , which are collected in appendix [ app : private ] .compared to the analysis of the non - private algorithms , we need to control several additional error sources which were not present for the nonprivate algorithm .in particular , we will have to control the error between and , the fact that the algorithm ( approximately ) maximizes instead of , and the error introduced by the exponential sampling error .the necessary bounds are given by the following lemma .to state it , we denote the maximal degree in by .[ lem : private - output ] let be the output of the randomized algorithm [ alg : main - algo ] . then the following properties hold with probability at least with respect to the coin flips of the algorithm : 1 ) .\2 ) if and , then observing that , we get that which immediately gives ( 1 ) . to prove ( 2 ) , we first use ( 1 ) and the assumptions on and to bound this implies that the extended score is equal to the original score .we conclude the proof by using lemma [ lem : exp - mech ] to show that with probability at least , the exponential mechanism returns a matrix such that where . bounding , this completes the proof of the lemma .theorem [ thm : final ] will follow from the following theorem in the same way as theorem [ thm : final ] followed from theorem [ thm : h ] .[ thm : h ] under the assumptions of theorem [ thm : final ] , { \frac{\lambda^2\log k}{\rho n } } + \lambda\sqrt{\frac{k^2\log n}{n\eps } } + \frac { { \lambda}}{n\rho\eps } } \right)}}.\ ] ] moreover , if we replace the assumption in theorem [ thm : final ] by the stronger assumption , then a.s . as , { \frac{\lambda^2\log k}{\rho n } } + \lambda\sqrt{\frac{k^2\log n}{n\eps } } + \frac { \sqrt\lambda}{n\rho\eps } } \right ) } } + o(1 ) .\end{aligned}\ ] ] with probability at least , we may assume that the output of the private algorithm obeys the conclusions of lemma [ lem : private - output ] . with a decrement in probability of at most , we then have that next use the assumption , the fact that , and lemma [ lem : max - degree ] from appendix [ app : aux ] with to show that at a decrement in probability of at most , the maximal degree in is at most . lemma [ lem : private - output ] then allows us to use proposition [ cor : concentration ] with this introduces an additional error term into the bound and an extra error term of order in the upper bound , leading to the estimate that , with probability at least , { { \lambda^2 } { { \left ( { \frac{k^2}{n^2\rho } + \frac{\log k}{n\rho } } \right ) } } } + \lambda\sqrt{\frac{k^2\log n}{n\eps } } } \right)}}\ ] ] and from here on we proceed as in the proof of , except that we now move from the minimizer for to a matrix by rounding the entries of down to the nearest multiple of . 
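to make the least-squares objects in this analysis concrete, the following minimal python sketch carries out the class-averaging step and evaluates the squared-error objective for a candidate partition; the function names and the plain (unscaled) frobenius normalization are illustrative choices and not the paper's exact definitions.

import numpy as np

def block_average(A, pi, k):
    # k-by-k matrix whose (a, b) entry is the average of A over vertices
    # x in class a and y in class b of the partition pi (the zero diagonal
    # of A is simply included in the diagonal blocks in this sketch).
    B = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            block = A[np.ix_(pi == a, pi == b)]
            if block.size:
                B[a, b] = block.mean()
    return B

def lift(B, pi):
    # n-by-n matrix B_pi with (B_pi)[x, y] = B[pi[x], pi[y]].
    return B[np.ix_(pi, pi)]

def ls_objective(A, B, pi):
    # Squared Frobenius distance ||A - B_pi||^2; for a fixed partition pi it
    # is minimised over B exactly by the class averages computed above.
    return float(np.sum((A - lift(B, pi)) ** 2))

# tiny usage example: 6 vertices, 2 classes
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(6, 6)); A = np.triu(A, 1); A = A + A.T
pi = np.array([0, 0, 0, 1, 1, 1])
B = block_average(A, pi, k=2)
print(B, ls_objective(A, B, pi))

the averaging step mirrors the statement above that the minimizer is obtained from the observed matrix by averaging over a partition; the exact score used in the paper carries a different normalization and is only sketched here.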
instead of, we now obtain the bound { \lambda^2 { { \left ( { \frac{k^2}{n^2\rho } + \frac{\log k}{n\rho } } \right ) } } } + \lambda\sqrt{\frac{k^2\log n}{n\eps } } } \right ) } } + o{{\left ( { \sqrt\lambda\bigl|\frac{\hat\rho}\rho-1\bigr| } \right ) } } , \label{eq : oneofourbounds}\ ] ] a bound which is valid with probability at least . now the fact that and implies that {\frac{\lambda^2 } { n } } } \right)}}+ o_p{{\left ( { \frac { \sqrt\lambda}{n\rho\eps } } \right)}}.\ ] ] combining this with , we obtain that with probability at least , { \lambda^2 { { \left ( { \frac{k^2}{n^2\rho } + \frac{\log k}{n\rho } } \right ) } } } + \lambda\sqrt{\frac{k^2\log n}{n\eps } } + \frac { \sqrt\lambda}{n\rho\eps } } \right ) } } .\end{aligned}\ ] ] the contribution of the failure event can now be bounded by to complete the proof of the bound in probability , we have to add the error terms from and .we can simplify the resulting expression somewhat by first noting that the left hand side of eq .is of order at most , which shows that for the bound on not to be vacuous , we need .we can therefore drop the term inside the fourth root of . furthermore , by the assumption of the theorem , , which shows that the first term in is {\lambda^2/n}) ] instead of }},{{w[{\hat b_\pi}]}}) ] that is constant on rectangles of the form , where form a partition of the interval ] ( see appendix [ sec : sampling - k - block ] ) .our nonprivate estimator then has asymptotic error {\frac k n + \frac{\log k}{\rho n } + \frac { k^2}{\rho n^2 } } ] . for and density , the error of our estimatoris again dominated by the ( unavoidable ) error of {k / n} ] .these bounds apply to estimating the edge - probability matrix , but do not apply directly to estimating an underlying block graphon .lemma [ lem : sampling - k - block ] shows that converges to in the metric at a rate of {k / n}) ] for estimation of .this is the best known nonprivate rate , and is matched by our nonprivate rate . in the sparse case , where as , wolfe and olhede showed under additional assumptions ( roughly , that entries of are bounded above and below by multiples of ) that the mle produces an estimate of that satisfies {\frac{\log^2(1/\rho)\log(k)}{n\rho } } } \right)}} ] for estimating an underlying -block graphon . note that when is small , any of these three terms may dominate the rate . [ [ hlder - continuous - graphons . ] ] hlder - continuous graphons .+ + + + + + + + + + + + + + + + + + + + + + + + + + + the known algorithms for estimating continuous graphons proceed by fitting a -block model to the observed data , and arguing that this model approximates the underlying graphon .our results show that if is constant and is -hlder continuous ( lipschitz continuity corresponds to ) , then the nonprivate error scales as {\frac{\log n } { \rho n } } + n^{-\alpha/2} ] for an appropriate choice of .see remark [ rem : hoelder - cont - w ] for details . in the dense case ( ) , that one can estimate a -hlder continuous graphon by a -block graphon with error with the last term accounting for the difference between estimating and . setting to the optimal value of a rate which except for the case is dominated by the term .our nonprivate bound matches this bound for , and is worth for , while the private one is always worth . 
analyse the mle in the sparse case , again restricting to -block models .they show - block graphons in which all intervals have size ( since allowing nonuniformly sized blocks only makes their bounds worse ) , the original graphon takes values in a range ] , as stated in the introduction .the notion of node - privacy defined in section [ sec : diff - p ] `` composes '' well , in the sense that privacy is preserved ( albeit with slowly degrading parameters ) even when the adversary gets to see the outcome of an adaptively chosen sequence of differentially private algorithms run on the same data set .[ lem : composition ] if an algorithm runs randomized algorithms , each of which is -differentially private , and applies an arbitrary ( randomized ) algorithm to their results , i.e. , then is -differentially private .this holds even if for each , is selected adaptively based on .[ [ output - perturbation . ] ] output perturbation .+ + + + + + + + + + + + + + + + + + + + [ sec : sens ] one common method for obtaining efficient differentially private algorithms for approximating real - valued functions is based on adding a small amount of random noise to the true answer .a _ laplace _ random variable with mean and standard deviation has density .we denote it by . in the most basic framework for achieving differential privacy , laplace noiseis scaled according to the _ global sensitivity _ of the desired statistic .this technique extends directly to graphs as long as we measure sensitivity with respect to the metric used in the definition of the corresponding variant of differential privacy .below , we explain this ( standard ) framework in terms of node privacy .let denote the set of all graphs .[ def : global - sensitivity ] the -global node sensitivity of a function is : for example , the edge density of an -node graph has node sensitivity , since adding or deleting a node and its adjacent edges can add or remove at most edges .in contrast , the number of nodes in a graph has node sensitivity .[ laplace mechanism ][lem : dmns ] the algorithm ( which adds i.i.d .noise to each entry of ) is -node - private .thus , we can release the number of nodes , , in a graph with noise of expected magnitude while satisfying node differential privacy . given a public bound on , we can release the number of edges , , with additive noise of expected magnitude .[ [ exponential - mechanism . ] ] exponential mechanism .+ + + + + + + + + + + + + + + + + + + + + + sensitivity plays a crucial role in another basic design tool for differentially private algorithms , called the _ exponential mechanism_.suppose we are given a collection of functions , from to , each with sensitivity at most .the exponential mechanism , due to , takes a data set ( in our case , a graph ) and aims to output the index of a function in the collection which has nearly maximal value at , that is , such that .the algorithm samples an index such that [ lem : exp - mech ] the algorithm is -differentially private .moreover , with probability at least , its output satisfies [ [ lipschitz - extensions . ] ] lipschitz extensions .+ + + + + + + + + + + + + + + + + + + + + there are cases ( and we will encounter them in this paper ) , where the sensitivity of a function can only be guaranteed to be low if the graph in question has sufficiently low degrees .in this situation , it is useful to consider extensions of these functions from graphs obeying a certain degree bound to those without this restriction. 
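the two generic tools just described can be written down in a few lines; the routine names below are ours, the noise scale follows the usual global-sensitivity calibration of the laplace mechanism, and the sampling weights follow the standard exponential-mechanism form for score functions whose sensitivity is at most the supplied bound.

import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, eps):
    # Release true_value + Lap(sensitivity / eps); eps-differentially private
    # whenever sensitivity upper-bounds the global (node) sensitivity of the
    # released statistic.
    return true_value + rng.laplace(scale=sensitivity / eps)

def exponential_mechanism(scores, sensitivity, eps):
    # Sample index i with probability proportional to
    # exp(eps * scores[i] / (2 * sensitivity)).
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()            # numerical stabilisation only
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))

# example: privately release an edge count under a public degree bound d
# (adding or removing one node changes the count by at most d edges).
d, eps = 20, 0.5
noisy_edges = laplace_mechanism(true_value=1234, sensitivity=d, eps=eps)
print(noisy_edges)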
let denote the set of graphs with degree at most .given functions and , we say is a vertex lipschitz extension of from to if agrees with on and has the same node - sensitivity as , that is we close this section with the proof of lemma [ lem : lip - extension ] .the existence of follows from a very general result ( e.g. , ) , which states that for any metric spaces and such that , and any lipschitz function , there exists an extension with the same lipschitz constant .the explicit , efficient construction of extensions for linear functions is due to .the idea is to replace with the maximum of where ranges over weighted subgraphs of with ( weighted ) degree at most .it is the value of the following linear program : ^{n\timesn } \text { is symmetric , and } \\ c_{i ,j } \leq a(g)_{i , j } \text { for all } i , j \text { , and }\\ \sum_{j \neq i } c_{i , j } \leq d \text { for all } i \in [ n]\ , . \end{cases}\ ] ] see for the analysis of this program s properties .[ lem : good - trho ] let ^ 2\to[0,\lambda] ] , let , let and assume that is bounded away from zero. then 1 . , and , so in particular for any .2 . let }}\|_2 ] with and .observing that only those terms contribute for which either , or , and bounding \leq \|w\|_2 ^2\leq \|w\|_\infty \|w\|_1 ] where .fix a -block graphon , and let be the lengths of the `` blocks '' , that is the intervals defining the block representation ( so that and ) . given a sample of i.i.d .uniform values in ] with of finding a permutation of ] .we say is correctly aligned if its interval is contained in the block in which landed .for each block , we can ensure that of the points that landed in get aligned with ( the term accounts for the fact that up to of the length at each end of the interval does not line up exactly with one of the intervals ) .thus , the number of points that get incorrectly aligned is at most .each misaligned point contributes at most to the total squared error , so we have each term in the norm on the right - hand side is the deviation of a binomial from it s mean , and has standard deviation . this upper bounds the expected absolute deviation by jensen s inequality .thus , . the sum is maximized when for all ; it then takes the value .hence . by jensen s inequality ,{k / n} ] .fix a matrix , and let denote the best -block approximation to in the norm ( that is , the minimizer of ) . given a uniform i.i.d . sample in , let denote the matrix } ] . by the triangle inequality , lemma [ lem : sampling - k - block ] bounds by {k / n} ] is -hlder continuous for some ] , we can use the hlder continuity of to conclude that }}\|_2\leq \|w-{{w[{\tilde h}]}}\|_\infty\leq c\bigl(\frac 2n\bigr)^\alpha.\ ] ] to prove the lemma , it is therefore enough to prove that }}-{{w[{h}]}}\|_2 ^ 2\bigr]=o(n^{-\alpha } ) , \ ] ] where ] into adjacent intervals of lengths .then let . 
for , is an average over points in , implying that this appendix , we prove the following theorem which implies theorem [ thm : cuts ] by the same arguments as those which lead from theorems [ thm : h ] and [ thm : h ] to theorems [ thm : final ] and [ thm : final ] .[ thm : cuts - from - h ] let be an integer .\(i ) under the assumptions of theorem [ thm : final ] , { \lambda^2{{\left ( { \frac{k^2 } { n^2\rho } + \frac{\log k } { n\rho } } \right ) } } } } \right)}}.\ ] ] \(ii ) under the assumptions of theorem [ thm : final ] , {\frac{\lambda^2\log k}{\rho n } } + \lambda\sqrt{\frac{k^2\log n}{n\eps}}+\frac { { \lambda}}{n\rho\eps } } \right)}}.\ ] ] before we prove the theorem , we start with a few bounds on the hausdorff distance of various sets of -way cuts . first , using the definition of the cut - distance ( and the fact that the set of -way cuts of a graph is invariant under relabelings ) , it is easy to see ( see also ) that whenever and are weighted graphs on ] , then implying that in a similar way , we have that for two graphons , we will also need to compare the fractional and integer cuts , and . to do so , one can use a simple rounding argument , as in theorem 5.4 and its proof from .this gives the bound instead of ( leading to the factor on the right hand side of ) , and that hausdorff distances were defined with respect to the -norm ( leading to a bound which is better by a factor than the bounds in ) . ] valid for any weighted graph with node weights on ] with probability at least . by the assumptions of the two theorems , .we apply to show that with probability at least . as a consequence , again with probability , by and , this implies that with the same probability next we apply to the weighted graph . since , we conclude that with probability at least , since for all and all , we can easily absorb the failure event , getting to complete the proof , we proceed as in the proof of to show that { \lambda^2{{\left ( { \frac{k^2 } { n^2\rho } + \frac{\log k } { n\rho } } \right ) } } } } \right)}}.\ ] ] combined with the bound from theorem [ thm : final ] , the fact that the cut - norm is bounded by the -norm , and the fact that , we conclude that }}\bigr ) \leq { \hat \eps_k^{(o)}}({{h_n}(w ) } ) + o_p{{\left ( { \sqrt[4 ] { \lambda^2{{\left ( { \frac{k^2 } { n^2\rho } + \frac{\log k } { n\rho } } \right ) } } } } \right)}}.\ ] ] combined with , , , and the bound , this proves .the proof of is essentially identical , except that now we use theorem [ thm : final ] .[ lem : chernoff - mult ] let be independent random variables taking values in $ ] , and let .if and , then for , the probability is at most let denotes the exact mean of ( so ) .the standard multiplicative form of the chernoff bound states that for ( not necessarily less than 1 ) , we have setting ( that is , ) , the bound above becomes both of these terms are bounded above by : the first , since ; and the second , since .
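the w-random sampling model that underlies the lemmas above can be made concrete with a short generator for a symmetric, zero-diagonal adjacency matrix drawn from a k-block graphon at a prescribed density; whether the density parameter multiplies the graphon or is absorbed into it is a matter of convention, and the helper below writes it explicitly (all parameter values in the example are placeholders).

import numpy as np

rng = np.random.default_rng(1)

def sample_k_block_graph(B, block_lengths, n, rho):
    # B             : k-by-k symmetric matrix of block values
    # block_lengths : lengths of the k intervals partitioning [0, 1]
    # n, rho        : number of vertices and target density parameter
    # Each vertex receives a uniform label u_i in [0, 1); the edge {i, j} is
    # present independently with probability min(rho * W(u_i, u_j), 1), where
    # W is the step function defined by B and the intervals.
    u = rng.uniform(size=n)
    boundaries = np.cumsum(block_lengths)
    labels = np.searchsorted(boundaries, u)          # block index per vertex
    Q = np.minimum(rho * B[np.ix_(labels, labels)], 1.0)
    A = (rng.uniform(size=(n, n)) < Q).astype(int)
    A = np.triu(A, 1)
    return A + A.T                                   # symmetric, zero diagonal

# example: a 3-block graphon with equal blocks at density parameter 0.05
B = np.array([[3.0, 0.5, 0.5],
              [0.5, 2.0, 0.5],
              [0.5, 0.5, 1.0]])
A = sample_k_block_graph(B, [1/3, 1/3, 1/3], n=500, rho=0.05)
print(A.sum() // 2, "edges on", A.shape[0], "vertices")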
we design algorithms for fitting a high-dimensional statistical model to a large, sparse network without revealing sensitive information of individual members. given a sparse input graph, our algorithms output a node-differentially-private nonparametric block model approximation. by node-differentially-private, we mean that our output hides the insertion or removal of a vertex and all its adjacent edges. if the input is an instance of a network obtained from a generative nonparametric model defined in terms of a graphon, our model guarantees consistency, in the sense that as the number of vertices tends to infinity, the output of our algorithm converges to the generating graphon in an appropriate version of the norm. in particular, this means we can estimate the sizes of all multi-way cuts in the network. our results hold as long as the graphon is bounded, the average degree of the graph grows at least like the log of the number of vertices, and the number of blocks goes to infinity at an appropriate rate. we give explicit error bounds in terms of the parameters of the model; in several settings, our bounds improve on or match known nonprivate results.
blood is a heterogeneous multi - phase mixture of solid corpuscles ( red blood cells , white blood cells and platelets ) suspended in a liquid plasma which is an aqueous solution of proteins , organic molecules and minerals ( refer to figure [ bloodcomponents ] ) .the rheological characteristics of blood are determined by the properties of these components and their interaction with each other as well as with the surrounding structures .the blood rheology is also affected by the external physical conditions such as temperature ; however , in living organisms in general , and in large mammals in particular , these conditions are regulated and hence they are subject to minor variations that can not affect the general properties significantly .other physical properties , such as mass density , may also play a role in determining the blood overall rheological conduct .the rheological properties of blood and blood vessels are affected by the body intake of fluids , nutrients and medication , although in most cases the effect is not substantial except possibly over short periods of time and normally does not have lasting consequences .the viscosity of blood is determined by several factors such as the viscosity of plasma , hematocrit level ( refer to figures [ vishemaplot ] and [ vissrplot ] ) , blood cell distribution , and the mechanical properties of blood cells .the blood viscosity is also affected by the applied deformation forces , extensional as well as shearing , and the ambient physical conditions .while the plasma is essentially a newtonian fluid , the blood as a whole behaves as a non - newtonian fluid showing all signs of non - newtonian rheology which includes deformation rate dependency , viscoelasticity , yield stress and thixotropy .most non - newtonian effects originate from the red blood cells due to their high concentration and distinguished mechanical properties such as elasticity and ability to aggregate forming three - dimensional structures at low deformation rates .deep understanding of the blood rheology , which includes its non - newtonian characteristics , is important for both diagnosis and treatment .looking into the existing biological literature and comparing to the non - biological literature , such as earth science studies , it is obvious that the non - newtonian phenomena in the blood circulation has not been given sufficient attention in the biological studies .one reason is the complexity of the biological systems which makes the consideration of non - newtonian effects in blood circulation more difficult to handle .hence , to simplify the flow models and their computational implementation , blood is generally assumed newtonian and these effects are ignored .the obvious difficulties in observation , experimentation and measurement related to _ in vivo _ blood flow add another complicating factor. moreover , apart from some rare and extreme pathological states , the non - newtonian effects in blood flow are relatively mild as compared to the non - newtonian effects exhibited by typical polymeric systems for instance .this makes the approximation of blood as a newtonian fluid an acceptable assumption and not far from reality in a significant part of the circulatory system under normal conditions .many theoretical , numerical , experimental and clinical studies of non - newtonian effects in blood circulation have been conducted in the last few decades . 
however , there is no general approach in tackling this problem in a systematic way based on a unified vision .almost all the past studies focus on individual problems and deal with the existing non - newtonian phenomena within a limited local context .the current study , which is basically a brief overview of this subject , is trying to deal with the non - newtonian blood rheology in general as applied to all levels of the circulation system .as indicated already , blood is a complex non - newtonian fluid showing various signs of non - newtonian rheology such as shear thinning , yield stress and viscoelasticity . the blood is also characterized by a distinctive thixotropic behavior revealed by the appearance of hysteresis loops during shearing cycles .these non - newtonian properties do not affect the flow patterns inside the flow paths and the fluid transportation only but they also affect the mechanical stress on the blood vessel walls and the surrounding tissues especially in cases of irregular lumen geometry like stenosed arteries .the mechanical stress on the vessel wall and tissue is not only important for its direct mechanical impact , especially when sustained over long periods of time , but it can also contribute to the commencement and advancement of long term lesions such as forming sediments inside the vessel wall .the non - newtonian properties , like viscoelasticity , have also an impact on other transport phenomena such as pulse wave propagation in arteries .non - newtonian effects in general are dependent on the magnitude of deformation rates and hence they can exist or be enhanced at certain flow regimes such as low shear rates .the non - newtonian effects are also influenced by the type of deformation , being shear or elongation .the impact of the non - newtonian effects can be amplified by a number of factors such as pathological blood rheology and flow in stenosed vessels and stents .an interesting finding of one study is that although flow resistance and wall shear stress increases as the size of stenosis increases for a given rheological model , the non - newtonian nature of blood acts as a regulating factor to reduce the resistance and stress and hence contribute to the body protection . in this context, shear thinning seems to have the most significant role in facilitating blood flow through stenotic vessels .blood is a predominantly shear thinning fluid ( refer to figure [ vissrplot ] ) , especially under steady flow conditions , and this property has the most important non - newtonian impact .shear thinning is not a transient characteristics ; moreover it is demonstrated at most biological flow rates although it is more pronounced at low deformation regimes .shear thinning rheology arises from disaggregation of the red blood cells with increasing shear rate .this same reason is behind the observed thixotropic blood behavior as shearing forces steadily disrupt the structured aggregation of blood cells with growing deformation time . 
the origin of other non - newtonian effects can also be traced back to the blood microstructure as will be discussed next .the viscoelastic nature of blood basically arises from its corpuscular microstructure .viscoelastic properties originate from the red blood cells , which are distinguished by their pronounced elastic deformability associated with the ability to aggregate forming three - dimensional structures known as rouleaux .the aggregation is mostly demonstrated at low shear rates and hence non - newtonian behavior in general and viscoelasticity in particular are more pronounced at these regimes of low deformation .the viscoelastic effects are magnified , if not activated , by the pulsatile nature of blood flow .the viscoelastic effects in blood circulation should not be limited to the viscoelastic properties of the blood itself but also to the viscoelastic ( or elastic depending on the adopted model ) properties of the blood vessels and the porous tissue through which the blood is transported .this can be justified by the fact that all these effects are manifested by the blood circulation and hence they participate , affect and affected by the circulation process .an interaction between the viscoelastic behavior of blood with that of the vessel wall and porous tissue is unavoidable consequence .blood also demonstrates yield stress although there is a controversy about this issue .yield stress arises from the aggregation of red blood cells at low shear rates to form the above - mentioned three - dimensional micro - structures ( rouleaux ) that resist the flow .studies have indicated that yield stress is positively correlated to the concentration of fibrinogen protein in blood plasma and to the hematocrit level .an illustrative plot of the dependence of yield stress on hematocrit level is shown in figure [ yshemaplot ] .other factors , such as the concentration of minerals , should also have a contribution .many of the blood rheological characteristics in general , and non - newtonian in particular , are also controlled or influenced by the fibrinogen level .the yield stress characteristic of blood seems to vanish or become negligible when hematocrit level falls below a critical value .yield stress contributes to the blood clotting following injuries and subsequent healing , and may also contribute to the formation of blood clots ( thrombosis ) and vessel blockage in some pathological cases such as strokes .the value of yield stress , as reported in a number of clinical and experimental studies , seems to indicate that it is not significant and hence it has no tangible effect on the flow profile ( and hence flow rate ) at the biological flow ranges in large and medium size blood vessels .however , it should have more significant impact in the minute capillaries and some porous structures where flow at very low shear rates occurs. the magnitude of yield stress and its effect could be aggravated by certain diseased states related to the rheology of blood , like polycythemia vera , or the structure of blood vessels such as stenoses . as a shear thinning fluid, blood is also characterized by a thixotropic behavior , which is confirmed experimentally by a number of studies , due to the intimate relation between these two non - newtonian properties .this may also explain a possible controversy about the thixotropic nature of blood as the thixotropic - like behavior may be explained by other non - newtonian characteristics of blood . 
despite the fact that thixotropy is a transient property , due to the pulsative nature of the blood flow the thixotropic effects may have long term impact on the blood circulation .this equally applies to the time - dependent effects of viscoelasticity .thixotropy is more pronounced at low shear rates with a long time scale .the effect , however , seem to have a less important role in blood flow than other non - newtonian effects such as shear thinning , and this could partly explain the limited amount of studies dedicated to this property .the thixotropic behavior of blood is very sensitive to the blood composition and hence it can demonstrate big variations between different individuals and under different biological conditions . it should be remarked that time dependent effects in general , whether thixotropic or viscoelastic in nature or of any other type , should be expected in the flow of blood due to a number of reasons .one reason is the pulsatility of blood flow and the rapid change in the deformation conditions during the systolic - diastolic cardiac cycle .another reason is the rapid change in the shear magnitude between one part of the system to another part , i.e. different shear rates between the arteries , capillaries , porous tissue and venous part .a third reason is the irregular shape , such as bends and converging - diverging formations , of the blood flow conduits which activates or accentuates time - dependent effects .a fourth reason is the difference in the deformation rates between the ventricular systole and diastole .the last reason may explain the indication of one study that the non - newtonian effects are more important at diastole than systole since the shear rates during diastole are expected to be lower than those at systole .another remark is that most of the reported non - newtonian rheological parameters , as well as many other physical properties of blood , are obtained from _ in vitro _measurements and hence they are subject to significant errors as an indicator to the _ in vivo _ values due to the difference in ambient conditions as well as the experimental requirements and procedures , such as using additives to preserve and fluidize the blood samples , that can introduce significant variations on the blood properties. moreover , the reported values could be highly dependent on the measurement method .the differences between the individual subjects and their conditions like dietary intake prior to measurement , most of which are difficult to control or quantify , should add to the uncertainties and fluctuations .hence , most of these values should be considered with caution especially when used for _ in vivo _ and patient - specific modeling and investigation .blood is a complex non - newtonian fluid and hence reliable modeling of blood flow in the circulation system should take into account its non - newtonian characteristics .several non - newtonian rheological models have been used to describe the blood rheology ; these models include carreau - yasuda , casson , power law , cross , herschel - bulkley , oldroyd - b , quemada , yeleswarapu , bingham , eyring - powell , and ree - eyring .the constitutive equations of these rheological models are given in table [ bloodmodelstable ] .other less known fluid models have also been used to describe the rheology of blood .a quick inspection of the blood literature reveals that the most popular models in non - newtonian hemorheologic and hemodynamic modeling are carreau - yasuda and casson . 
table [ bloodmodelstable ]: the non-newtonian fluid models that are commonly used to describe blood rheology. the meanings of symbols are given in nomenclature [ nomenclature ]. the symbols that define fluid characteristic properties are generically used and hence the same symbol may represent different physical attributes in different models. some of these models may have more than one form; the one used in this table is the most widespread of its variants. the last column represents the non-newtonian properties frequently obtained from these models in the context of blood modeling, although other properties may also be derived and employed in modeling other materials.

blood is also modeled as a newtonian fluid, which is a good approximation in many circumstances such as the flow in large vessels at medium and high shear rates under non-pathological conditions. as there is no sudden transition from non-newtonian to newtonian flow as a function of shear rate, there is no sharply defined critical limit for such a transition and hence this remains a matter of choice which depends on a number of objective and subjective factors. however, there seems to be a general consensus on the shear rate range (in s) over which non-newtonian effects are considered significant; above this limit the blood is generally treated as a newtonian liquid. no single model, newtonian or non-newtonian, can capture all the features of the blood complexities and hence different models are used to represent different characteristics of the blood rheology. these models, whether newtonian or non-newtonian, obviously have significant differences and hence they can produce very different results. the results also differ significantly between newtonian and non-newtonian models in most cases. the non-newtonian models vary in their complexity and ability to capture different physical phenomena. diverse methods have been used in modeling and simulating non-newtonian effects in blood rheology; these include analytical, stochastic, and numerical mesh methods such as finite element, finite difference, finite volume, and spectral collocation methods. as indicated previously, most, if not all, non-newtonian characteristics arise from the blood microstructure and particularly the concentration, distribution and mechanical properties of the red blood cells.
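as a concrete illustration of two of the most popular models listed in the table above, the following sketch evaluates the carreau-yasuda and casson apparent viscosities over a range of shear rates; the parameter values are rough, order-of-magnitude figures sometimes quoted for blood and should be treated as placeholders rather than measured patient data.

import numpy as np

def carreau_yasuda_viscosity(gamma_dot, mu0, mu_inf, lam, n, a):
    # Carreau-Yasuda apparent viscosity:
    # mu = mu_inf + (mu0 - mu_inf) * [1 + (lam*gamma_dot)**a]**((n-1)/a)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** a) ** ((n - 1.0) / a)

def casson_viscosity(gamma_dot, tau_y, mu_p):
    # Casson apparent viscosity, mu = tau / gamma_dot, with
    # sqrt(tau) = sqrt(tau_y) + sqrt(mu_p * gamma_dot)
    return (np.sqrt(tau_y / gamma_dot) + np.sqrt(mu_p)) ** 2

# illustrative (not patient-specific) parameter values in SI units
shear_rates = np.logspace(-2, 3, 6)                  # 0.01 ... 1000 1/s
mu_cy = carreau_yasuda_viscosity(shear_rates, mu0=0.056, mu_inf=0.0035,
                                 lam=3.3, n=0.36, a=2.0)
mu_cas = casson_viscosity(shear_rates, tau_y=0.005, mu_p=0.0035)
for g, m1, m2 in zip(shear_rates, mu_cy, mu_cas):
    print(f"gamma_dot={g:8.2f} 1/s   carreau-yasuda={m1:.4f} Pa.s   casson={m2:.4f} Pa.s")

both curves show the shear-thinning trend discussed above: the apparent viscosity falls by roughly an order of magnitude between the low-shear and high-shear ends of the range.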
for example , the viscoelastic properties of blood originate from the mechanical properties of the suspended cells and their capability of elastic deformation and structural formation , while the thixotropic properties arise from steady disaggregation of blood cells over prolonged shearing time .interestingly , the majority of the rheological models used to describe the blood rheology are bulk phenomenological models of empirical nature with little consideration , if any , to its highly influential micro - structure .hence , more structurally - based models , such as those based on molecular dynamics , are required to improve the description and modeling of the blood rheological behavior .because the impact of the non - newtonian effects is highly dependent on the shape and size of the flow conduits , different non - newtonian rheological behavior , and hence different flow modeling approaches , should apply to the different parts of the circulatory system .different approaches are also required because of the difference in the nature of the blood transportation processes in these parts , such as large scale bulk flow in the large vessels as opposite to perfusion or diffusion in the porous tissue .we can identify three types of circulatory subsystems in which non - newtonian effects should be analyzed and modeled differently : large blood vessels which mainly apply to arteries and veins , small blood vessels which broadly include capillaries and possibly arterioles , and porous tissue such as the myocardium and muscles in general .these three subsystems are graphically illustrated in figure [ subsystemfig ] .the distinction between large and small vessels is not clear cut as it depends on the nature of the flow phenomenon under consideration and the associated circumstances . however , the distinctive feature that should be used as a criterion to differentiate between these two categories of blood vessels in this context is the validity of the continuum approximation as applied to the blood where in the large vessels this approximation is strictly held true while in the small vessels it approaches its limits as some non - continuum phenomena start to appear . 
in the following subsections we outline general strategies for modeling non-newtonian effects in the circulation subsystems.

in large vessels, which include large cavities such as the ventricles and atria inside the myocardium as well as the large arteries and veins, the blood essentially behaves as a newtonian fluid. one reason is that the blood in such large lumens and cavities is normally exposed to relatively high shear rates and hence the non-newtonian effects, which are basically induced at low shear rates, die out. also, at this large scale the blood appears as a homogeneous continuum medium with a diminishing effect of blood cell aggregation. the interaction between the blood cells, with their pronounced elastic properties, is also minimal at this scale. however, in some pathological situations non-newtonian effects are important even in the big cavities and large vessels and therefore they should be considered in the flow model. this may also be true in some non-pathological situations in which the non-newtonian effects can be critical to the observed phenomena. it should be remarked that the non-newtonian effects in the venous part of the circulatory system should be more important than in the arterial part due to the lower deformation rates in the former than the latter, as this seems to be an accepted fact in hemodynamic studies. however, we did not find a proper discussion of this issue in the available literature. the difference in the blood composition in these two parts (due, for example, to the difference in concentration of substances like nutrients, oxygen and metabolic wastes) should also affect the non-newtonian rheology in these two subsystems and introduce more complications in modeling blood flow, especially in large vessels. like the previous issue, we did not find an explicit discussion of this issue in the available literature. another remark is that certain parts of the large vessel network can contain spots of low shear rates, such as bends and bifurcation junctions, and hence non-newtonian effects in large vessels could be significant in some cases where these spots play an exceptionally important role in the blood flow, due to a diseased state for instance.

several mathematical and computational models have been used to describe the flow of blood in large individual vessels. these models include the elastic one-dimensional navier-stokes and the rigid hagen-poiseuille models for newtonian fluids, as well as many other non-newtonian rheological models such as cross and carreau-yasuda, as discussed previously in section [ modelingsec ]. the characteristics of blood flow in large single vessels are obtained from these mathematical models either analytically or numerically, e.g. through the use of finite element or finite difference techniques. most of the employed non-newtonian fluid models are generalized newtonian models and hence they do not account for history-dependent elastic or thixotropic effects. also, the analytical non-newtonian models generally apply to rigid tubes only, although there are some attempts in this context to extend poiseuille flow to elastic vessels with non-newtonian rheology. numerical methods may also be used to extend the non-newtonian models to elastic vessels.
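a minimal worked example of the rigid-tube analytical solutions mentioned above: the hagen-poiseuille flow rate for a newtonian fluid and its standard power-law generalization, which reduces to the newtonian result when the flow behavior index equals one; the numerical values are illustrative only.

import numpy as np

def poiseuille_newtonian(radius, length, dp, mu):
    # Hagen-Poiseuille volumetric flow rate for a Newtonian fluid.
    return np.pi * dp * radius ** 4 / (8.0 * mu * length)

def poiseuille_power_law(radius, length, dp, k, n):
    # Volumetric flow rate of a power-law fluid (tau = k * gamma_dot**n) in a
    # rigid circular tube; reduces to Hagen-Poiseuille for n = 1, k = mu.
    return (np.pi * n / (3.0 * n + 1.0)) * radius ** 3 * \
           (dp * radius / (2.0 * k * length)) ** (1.0 / n)

# illustrative arterial-scale numbers: radius 2 mm, length 10 cm, 1 kPa drop
R, L, dp = 2e-3, 0.1, 1000.0
print(poiseuille_newtonian(R, L, dp, mu=0.0035))
print(poiseuille_power_law(R, L, dp, k=0.017, n=0.7))   # a shear-thinning case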
with regards to the flow in vascular networks of large vessels , the main models used to describe and simulate blood flow are the one - dimensional navier - stokes finite element model for elastic networks and the hagen - poiseuille model for rigid networks . both of these models are newtonian although the second may be extended to poiseuille - like non - newtonian flow through the inclusion of time - independent non - newtonian effects using a vessel - dependent non - newtonian effective viscosity which is computed and updated iteratively to reach a consistent flow solution over the whole network .based on a non - thorough inspection of the available literature , there seems to be no extension to the traditional one - dimensional navier - stokes distensible network model to incorporate non - newtonian effects .in fact we did not find serious attempts in the available literature to extend the navier - stokes equation in general ( whether one - dimensional or multi - dimensional , for rigid or elastic conduits ) to account for non - newtonian effects , although there are attempts to incorporate non - newtonian effects numerically into navier - stokes flow models .the navier - stokes equation with its nonlinearity is sufficiently complex to be solved for newtonian flow in most cases let alone with the added complexities and nonlinearities introduced by the non - newtonian rheology . with regards to the one - dimensional navier - stokes distensible model for single vessels and networks ,we propose two possible general approaches for extending this model to account for non - newtonian effects .one approach is to accommodate these effects in the fluid viscosity as parameterized by the viscosity friction coefficient .a second possible approach is to incorporate these effects in the flow profile as described by the momentum flux correction factor in the one - dimensional model . for the network ,the second approach is based on defining a vessel - dependent field over the whole network .a similar viscosity field may also be required for the first approach as well .non - newtonian effects are generally more pronounced in small flow ducts , such as capillaries , than in large ducts like arteries due to several reasons such as the deterioration of the continuum assumption at small scales especially for complex dispersed systems like blood . in such vesselsthe continuum approximation reaches its limit and the effect of blood cell aggregation with their interaction with the vessel wall becomes pronounced .this activates the non - newtonian rheological flow modes such as the induction of elastic effects which are associated with the elastic properties of the red blood cells and their structural formation .also , the non - newtonian effects of blood are more prominent at low shear rates which are the dominant flow regimes in the small vessels .hence non - newtonian rheological effects should be considered in modeling , simulating and analyzing the flow of blood in small vessels .the commonly used approach in modeling blood perfusion in living tissue is to treat the tissue as a spongy porous medium and employ darcy law which correlates the volumetric flow rate to the pressure gradient .there are several limitations in the darcy flow model in general and in its application to the blood flow through biological tissue in particular , and hence remedies have been proposed and used to improve the model . 
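the vessel-dependent effective-viscosity iteration mentioned above for rigid networks can be sketched as follows; the toy y-shaped network, the carreau-yasuda parameters and the use of the newtonian wall shear rate estimate 4q/(pi r^3) as the representative shear rate of each vessel are simplifying assumptions made purely for illustration.

import numpy as np

def carreau_yasuda(gd, mu0=0.056, mu_inf=0.0035, lam=3.3, n=0.36, a=2.0):
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gd) ** a) ** ((n - 1.0) / a)

# each row: (node_i, node_j, radius [m], length [m]); node 0 is the inlet,
# nodes 2 and 3 are outlets, node 1 is the bifurcation.
vessels = [(0, 1, 2.0e-3, 0.05), (1, 2, 1.5e-3, 0.05), (1, 3, 1.5e-3, 0.05)]
fixed_p = {0: 2000.0, 2: 0.0, 3: 0.0}        # prescribed boundary pressures [Pa]
num_nodes = 4
radii = np.array([v[2] for v in vessels])
lengths = np.array([v[3] for v in vessels])

mu_eff = np.full(len(vessels), 0.0035)       # initial guess: plasma-like viscosity
for _ in range(100):
    # Poiseuille conductance of each vessel at its current effective viscosity.
    G = np.pi * radii ** 4 / (8.0 * mu_eff * lengths)
    # Assemble and solve the nodal mass-conservation system for the pressures.
    M = np.zeros((num_nodes, num_nodes)); b = np.zeros(num_nodes)
    for k, (i, j, _, _) in enumerate(vessels):
        M[i, i] += G[k]; M[j, j] += G[k]; M[i, j] -= G[k]; M[j, i] -= G[k]
    for node, pval in fixed_p.items():
        M[node, :] = 0.0; M[node, node] = 1.0; b[node] = pval
    p = np.linalg.solve(M, b)
    # Update each vessel's effective viscosity from its wall shear rate and
    # stop when the viscosity field is self-consistent.
    Q = np.array([G[k] * (p[i] - p[j]) for k, (i, j, _, _) in enumerate(vessels)])
    gamma_w = 4.0 * np.abs(Q) / (np.pi * radii ** 3)
    mu_new = carreau_yasuda(gamma_w)
    if np.max(np.abs(mu_new - mu_eff) / mu_eff) < 1e-8:
        mu_eff = mu_new
        break
    mu_eff = mu_new

print("nodal pressures [Pa]:", np.round(p, 1))
print("vessel flow rates [m^3/s]:", Q)

the same fixed-point structure carries over, in principle, to the one-dimensional distensible navier-stokes network model by letting the viscosity friction coefficient, or alternatively the momentum flux correction factor, play the role of the per-vessel field that is updated between solves, which is essentially the extension proposed above.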
since this lawis originally developed for the flow through rigid porous media , modified versions are normally used to account for elasticity as required for modeling biological tissue .other limitations include neglecting edge effects and the restriction imposed by the laminar low velocity assumption on which the darcy flow is based .the former may be overcome by employing the boundary term in the brinkman equation while the latter can be eliminated through the use of the forchheimer model which incorporates a high - velocity inertial term . since darcy law in its original formulationis based on the newtonian flow assumptions , non - newtonian rheology is generally ignored in the modeling of blood perfusion through porous tissue .there have been several extensions and modifications to the darcy law to include non - newtonian effects in the flow of fluids in general , and polymers in particular , through rigid non - biological porous media .these attempts include , for example , viscoelastic models , herschel - bulkley , power law , blake - kozeny - carman , as well as other non - newtonian prototypes .pore scale network modeling has also been used to accommodate various non - newtonian effects ; such as shear thinning , yield stress and viscoelasticity ; in the flow of polymers through rigid porous media .similarly , other computational techniques , such as stochastic lattice boltzmann , have been tried to simulate and investigate the non - newtonian effects of the flow through rigid porous media in non - biological studies .however , it seems there is hardly any work on modeling the non - newtonian effects in the blood flow through living tissues by incorporating these effects into the distensible darcy flow model . to conclude , non - newtonian effects in the blood perfusion through porous tissue are not negligible in general due to the existence of fluid shearing and extensional forces which activate non - newtonian rheology .as the deformation rates in this type of transportation is normally low , and considering the small size and tortuous converging - diverging shape of the porous space inside which the blood perfuses , non - newtonian rheological effects are expected to be significant . 
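one simple way to fold a shear-thinning rheology into the darcy description of perfusion is to evaluate the viscosity at a characteristic pore-scale shear rate and iterate to self-consistency; the shear-rate estimate used below, velocity divided by the square root of permeability times porosity (up to a shape factor alpha), is only one of several forms used in the porous-media literature, and the permeability, porosity and pressure-gradient values are illustrative rather than tissue-specific.

import numpy as np

def carreau_yasuda(gd, mu0=0.056, mu_inf=0.0035, lam=3.3, n=0.36, a=2.0):
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gd) ** a) ** ((n - 1.0) / a)

def darcy_velocity_shear_thinning(grad_p, K, phi, alpha=1.0, tol=1e-10):
    # Superficial (Darcy) velocity u = (K / mu_eff) * |grad_p|, with mu_eff
    # evaluated at a characteristic shear rate gamma ~ alpha * u / sqrt(K*phi)
    # and resolved by fixed-point iteration.
    u = K * grad_p / 0.0035                  # start from a plasma-like viscosity
    for _ in range(200):
        gamma = alpha * u / np.sqrt(K * phi)
        u_new = K * grad_p / carreau_yasuda(gamma)
        if abs(u_new - u) < tol * max(abs(u), 1e-30):
            return u_new
        u = u_new
    return u

# illustrative tissue-scale numbers: K = 1e-14 m^2, porosity 0.1, |grad p| = 1e4 Pa/m
print(darcy_velocity_shear_thinning(grad_p=1e4, K=1e-14, phi=0.1), "m/s")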
non-newtonian rheological effects associated with other fluid transport phenomena, such as diffusion, could be negligible in such porous space due to the absence of shearing and extensional forces as a result of the lack of large scale fluid bulk movement in these micro- and nano-scale phenomena. however, the causes underlying the non-newtonian rheology should affect these transport phenomena as well, although more serious investigations are required to reach any definite conclusion about these issues. the existing literature is, unfortunately, limited in this scope.

blood is essentially a non-newtonian suspension. its complex non-newtonian rheological behavior is largely influenced by its microstructure which, through the essentially viscous water-based newtonian plasma combined with the effect of aggregation, deformation and orientation of the suspended blood cells with their distinguished elastic and three-dimensional structural formation properties, can show diverse non-newtonian effects at various shear rate regimes and through different flow conduit structures, although these effects are more pronounced at certain flow regimes and in particular structures. blood rheological properties, and its mechanical characteristics in general, are affected by several factors such as the type and magnitude of deformation rate, hematocrit level and protein concentration. because blood is a suspension, its properties can be strongly influenced by the shape and size of its flow conduits. the non-newtonian effects of blood can be accentuated by certain pathological conditions such as hypertension and myocardial infarction. apart from some extreme diseased states where the non-newtonian effects play an exceptionally important role in the fluid transport phenomena, the non-newtonian effects are generally mild in the bulk flow of blood in large vessels. more important influence of non-newtonian rheology occurs in the flow of blood through small vessels and in the blood perfusion through porous tissue. other transport phenomena, like diffusion, which do not involve bulk flow associated with deformation forces of shearing or extensional type that activate non-newtonian rheology, should not be affected directly by the non-newtonian characteristics, although the physical causes at the root of the non-newtonian rheology should have an impact on these processes. more fundamental studies are required to reach specific conclusions about these issues. non-newtonian rheology, such as viscoelasticity, of the blood vessel walls and the spongy porous tissue should also be considered as a contributor to the overall non-newtonian behavior of blood circulation, as these effects are both non-newtonian and circulatory in nature like the ones demonstrated by the blood itself. the effect of fluid-structure interaction should also be included in analyzing, modeling and simulating non-newtonian effects in the blood circulation as it plays an important hemodynamic and hemorheologic role.
nomenclature [ nomenclature ]: shear rate; characteristic shear rate; rate of strain tensor; characteristic time constant; relaxation time; retardation time; fluid viscosity; plasma viscosity; zero-shear-rate viscosity; infinite-shear-rate viscosity; shear stress; stress tensor; characteristic shear stress; yield stress; volume concentration; carreau-yasuda index; consistency coefficient; maximum volume fraction for zero shear rate; maximum volume fraction for infinite shear rate; cross model index; power law index; upper convected time derivative.
blood is a complex suspension that demonstrates several non-newtonian rheological characteristics such as deformation-rate dependency, viscoelasticity and yield stress. in this paper we outline some issues related to the non-newtonian effects in the blood circulation system and present modeling approaches based mostly on the past work in this field. keywords: hemorheology; hemodynamics; blood properties; biorheology; circulatory system; fluid dynamics; non-newtonian; shear thinning; yield stress; viscoelasticity; thixotropy.

non-newtonian rheology in blood circulation. taha sochi, university college london, department of physics & astronomy, gower street, london, wc1e 6bt. email: t.sochi.ac.uk.
the idea to describe the stock market share prices as a random walk dates back to .he assumed that price returns and waiting times are independent , identically distributed ( i.i.d . ) random variables which in the long time limit , according to the central limit theorem ( clt ) , conform to the normal distribution . the properties of random walks are well understood [ see for example ] and since low - frequency market data conforms fairly well to a normal distribution the concept of a continuous time random walk ( ctrw ) has been incorporated into standard economical textbooks .this approach enabled to formulate a theory of pricing financial derivatives for which they received the nobel price in economics .however there are serious problems with ctrws .the observed probability density function ( pdf ) of price returns shows considerable discrepancies from a gaussian [ ] in particular it exhibits power law tails for large values of . and managed to generalize bachelier s approach by introducing lvy distributions with the normal distribution being only a particular representative of these .lvy distributions , functions labeled by two real parameters and , are limit distributions for sums of independent random variables which emerge in the generalized clt . taking and ensures that the functions are both positive , even , behave like a steep gaussian for small values of and decay according to a pareto s power law much slower than a gaussian for large s . despite technical problems , the second moment being infinite for , these functions have been successfully used to model relaxation processes ( die - electrical , mechanical and nmr ) and in the theory of probability .recently , an investigation of about 40 million price quotes at the new york stock exchange ( nyse ) showed that the cumulative probability density of price returns behaves like with which yields a value of beyond the valid range if the pdf were to be modeled by a lvy distribution .lvy distributions are well defined for but can take negative values in that regime hence are not good candidates for pdfs .furthermore in most treatments so far it is assumed that price returns are independent of waiting times .this does not seem to be the case and it also seems counterintuitive .one would expect larger price variations to correspond to larger waiting times since one would expect it to be more difficult , take more time , for a broker to find buyers and sellers that balance out supply and demand if the price variation is larger than if it was smaller .such simplified reasoning implies that there should be a positive correlation between s and s which has not been taken into account in many works on this subject .in this paper we will generalize the theory to take account of these two difficulties .we will drop the assumption that s and s are independent random variables but still retain the property of independence of variables corresponding to different times . by assuming that it is a certain function of and that conforms asymptotically to the lvy distribution rather than either price returns or waiting times themselves we will work out the joint pdf .theoretical results will be compared to some high - frequency market data .the paper is organized as follows . in section [ sec : theory ] we formulate our theory and work out analytical formulas for pdfs and cumulative pdfs under various assumptions about how price returns and waiting times are correlated . 
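the construction outlined above, in which the magnitude of the price return and the waiting time are tied together through a common heavy-tailed "hypotenuse", can be illustrated with a short simulation; for simplicity a single stock is considered, the radius is drawn from the one-sided levy law with stability index one half (scipy.stats.levy) and the angle is taken uniform on (0, pi), all of which are placeholder choices rather than the paper's general setting.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_steps = 100_000

# heavy-tailed positive "hypotenuse" r; scipy.stats.levy is the one-sided
# stable law with alpha = 1/2, used here only as a convenient stand-in.
r = stats.levy.rvs(size=n_steps, random_state=rng)
phi = rng.uniform(0.0, np.pi, size=n_steps)   # keeps t = r*sin(phi) positive

x = r * np.cos(phi)            # price return of a single step (either sign)
t = r * np.sin(phi)            # waiting time of that step (positive)

log_price = np.cumsum(x)       # the random staircase of the walk
clock = np.cumsum(t)

# |x| and t are dependent by construction: large jumps tend to come with
# long waits. Pearson's coefficient is ill-suited to such heavy tails, so a
# rank correlation is reported instead.
print("spearman corr(|x|, t):", stats.spearmanr(np.abs(x), t).correlation)

by construction the simulated jump magnitudes and waiting times are positively dependent, which is the qualitative feature, larger price variations going with longer waiting times, that the model described above is meant to capture.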
in section [ sec : mastereq ] we set up an equation for the probability of having price at time and show that price evolution does not satisfy the markovian property .we compare our evolution equation to that worked out under different assumptions .finally in sections [ sec : cumuldistr ] , [ sec : goodfit ] and [ sec : correl ] we fit the cumulative pdfs of high - frequency exchange rate and low - frequency century stock data to our theory , discuss the goodness of fit and assess whether the correlations between s and s which follow from our theory conform to market data .in the following we will adhere to the convention that random variables are denoted by capital letters and values of random variables by lower - case letters .the objective is to work out the joint pdf of price returns and waiting times under the assumption that the random variables and are not independent . the first idea which comes into a mind of a physicistis to assume a weak dependence and to develop a perturbative approach .one could for example consider the correlation coefficient between and : - e[x]e[t]}{\sqrt{e[x^2 ] - e[x]^2 } \sqrt{e[t^2 ] - e[t]^2}}\ ] ] with ] takes , for large , the following form : where is a random variable conforming to a lvy distribution with parameters and .equation ( [ eq : genclt ] ) is a statement of the generalized clt under the assumption that the increments are i.i.d random variables .we also assume that instead of one stock there can be a set of stocks the prices of which are correlated with each other and so depends on all stock price returns , ie , and conforms to the lvy distribution ( [ eq : genclt ] ) . in order to work out the distribution of the returns and waiting times we consider a transformation of variables in the -dimensional space with cartesian coordinates : we require the elementary probabilities to be conserved : where for are certain functions of the variables which we set equal to one .now the problem is to make a proper choice of the function .we will assume that is a hypotenuse in a right triangle with edges of lengths and ( see fig.[fig : randwalk ] ) .therefore and the most natural transformation ( [ eq : mapping ] ) that comes into mind here is a mapping into spherical polar coordinates : where and for .the jacobian of the transformation ( [ eq : trafoi ] ) reads : where and the joint pdf of price returns and waiting times reads : integrating over all values with a fixed modulus we obtain the pdf which corresponds to the length of the vector of price returns and to the waiting time : in equation ( [ eq : jointpdf ] ) we applied a mapping into spherical polar coordinates in -dimensional space the jacobian of which , according to equation ( [ eq : jacobian ] ) , reads : finally we work out the cumulative density of waiting times + and that of price returns : now , the double integral on the right - hand - side in ( [ eq : cumuldistr1 ] ) is computed by going to polar coordinates and .the cumulative density reads : where . since the marginal pdfs and take the same form , it is readily seen that the joint pdf in ( [ eq : jointpdf ] ) does not change when and are mutually exchanged , the cumulative density of returns is given by the same formula as ( [ eq : waitingtimedistrfct ] ) except that is replaced by .let us also write down the large expansion of the cumulative density . 
where is the beta function .the expansion ( [ eq : waitingtimedistrfctexp ] ) evaluated by making use of an asymptotic expansion of the lvy function and may be useful for numerical calculations of for large values of .it is interesting to compare these results with a different assumption for the function .assume that it is the sum of moduli of price returns and the waiting time that conforms to the lvy distribution .looking at fig.[fig : randwalk ] one could say that it is the sum of neighboring horizontal and vertical sections in one knee of the random staircase that conforms to the lvy distribution in large time limit .the transformation of variables ( [ eq : mapping ] ) is now given by : where and .the jacobian reads and the joint pdf takes the form . the joint pdf which depends on the modulus of the vector in the norm reads : the cumulative probability distribution of waiting times reads : and that of price returns takes the form : for both cases considered above the cumulative density of either waiting times of price returns takes the form : where and and the kernels depend on the function and on the number of stocks .the functional form of allows us to work out an asymptotic expansion of the cumulative density of waiting times and that of price returns : .probability distribution functions in a ctrw with non - independent increments .the kernels and determine the cumulative densities in the following way : where and .the constants and which define the asymptotic , large argument behavior read , and [ tab : pdfs ] [ cols= " < , < , < , < , < , < " , ] e.scalas et al . , fractional calculus and continuous - time finance , physica a 284 ( 2000 ) 376 f.mainardi et al . ,fractional calculus and continuous - time finance ii : the waiting time distribution , physica a 287 ( 2000 ) 468481 l.sabatelli et al ., waiting time distributions in financial markets , eur .j. b * 27 * ( 2002 ) 273275
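the central construction above couples the waiting time and the modulus of the price return by treating them as the two legs of a right triangle whose hypotenuse follows a heavy-tailed law. the sketch below samples pairs under that construction for a single stock and checks the resulting association between long waiting times and large price moves; the pareto draw used for the hypotenuse is only a stand-in assumption for the lévy density of the text, and the sample size and quantile cutoff are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# hypotenuse with a heavy pareto-like tail (index 1.5, i.e. infinite variance)
# and a uniform angle in the first quadrant, so that both the waiting time and
# the modulus of the return are non-negative
r = rng.pareto(1.5, size=n) + 1.0
phi = rng.uniform(0.0, np.pi / 2.0, size=n)
t_wait = r * np.cos(phi)                  # waiting time
x_abs = r * np.sin(phi)                   # modulus of the price return

big = x_abs > np.quantile(x_abs, 0.99)    # the 1% largest price moves
# under this construction large price moves typically come with long waiting
# times, in line with the qualitative argument given earlier
print("median waiting time, all events :", np.median(t_wait))
print("median waiting time, large moves:", np.median(t_wait[big]))
```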
a theory describing the evolution of share prices on financial markets as a continuous-time random walk has been generalized in order to take into account the dependence of waiting times on price returns. a joint probability density function (pdf) built on the concept of a lévy stable distribution is worked out. the theory is fitted to high-frequency us$/japanese yen exchange-rate data and to low-frequency 19th-century irish stock data. the theory has been fitted both to price-return and to waiting-time data, and the adherence to the data, measured by the test statistic, is improved compared with the old theory.
stochastic processes; continuous-time random walk; lévy stable distributions; contingency table; interpolation; curve fitting; statistical finance; econophysics
02.50.ey; 02.50.wp; 02.60.ed; 89.90.+n
the pagerank algorithm ( pra ) is a cornerstone element of the google search engine which allows to perform an efficient information retrieval from the world wide web ( www ) and other enormous directed networks created by the modern society during last two decades .the ranking based on pra finds applications in such diverse fields as physical review citation network , scientific journals rating , ranking of tennis players and many others .the pra allows to find efficiently the pagerank vector of the google matrix of the network whose values enable to rank the nodes . for a given network with nodes the google matrixis defined as where the matrix is obtained from an adjacency matrix by normalizing all nonzero colummns to one ( ) and replacing columns with only zero elements by ( _ dangling nodes _ ) . for the www an element of the adjacency matrix is equal to unity if a node points to node and zero otherwise . here is the unit column vector and is its transposition .the damping parameter in the www context describes the probability to jump to any node for a random surfer . for wwwthe google search uses .the matrix belongs to the class of perron - frobenius operators naturally appearing for markov chains and dynamical systems . for is only one maximal eigenvalue of .the corresponding eigenvector is the pagerank vector which has nonnegative components with , which can be ranked in decreasing order to give the pagerank index .for www it is known that the probability distribution of values is described by a power law with , corresponding to the related cumulative dependence with at .the pagerank performs ranking which in average is proportional to the number of ingoing links , putting at the top the most known and popular nodes .however , in certain networks outgoing links also play an important role .recently , on the examples of the procedure call network of linux kernel software and the wikipedia articles network , it was shown that a relevant additional ranking is obtained by considering the network with inverse link directions in the adjacency matrix corresponding to and constructing from it a reverse google matrix according to relation ( [ eq1 ] ) at the same .the eigenvector of with eigenvalue gives then a new pagerank with ranking index , which was named cheirank .it rates nodes in average proportionally to the number of outgoing links highlighting their communicative properties . for www onefinds so that the decay of cheirank is characterized by a slower decay exponent compared to pagerank . in fig .[ fig1 ] , we show pagerank and cheirank distributions for the www networks of the universities of cambridge and oxford ( 2006 ) , obtained from the database . and cheirank versus the corresponding rank indexes and for the www networks of cambridge 2006 ( left panel ) and oxford 2006 ( right panel ) ; here ( ) and the number of links is ( ) for cambridge ( oxford).,scaledwidth=70.0% ] due to importance of pagerank for information retrieval and ranking of various directed networks it is important to understand how it is affected by the variation of the damping parameter . 
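a compact way to make the construction above concrete is to build the google matrix from a small adjacency matrix and obtain the pagerank by power iteration. the sketch below uses dense numpy arrays with the convention that column i of the adjacency matrix lists the links going out of node i; the damping value 0.85 and the toy network are illustrative.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """column-stochastic google matrix G = alpha*S + (1 - alpha)/N * ones,
    where A[j, i] = 1 if node i points to node j."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    S = np.empty((N, N))
    for i in range(N):
        # dangling columns (no outgoing links) are replaced by 1/N
        S[:, i] = A[:, i] / col_sums[i] if col_sums[i] > 0 else 1.0 / N
    return alpha * S + (1.0 - alpha) / N

def pagerank(G, tol=1e-12, max_iter=10_000):
    """power iteration for the leading (lambda = 1) eigenvector of G."""
    N = G.shape[0]
    p = np.full(N, 1.0 / N)
    for _ in range(max_iter):
        p_new = G @ p
        p_new /= p_new.sum()              # keep the 1-norm equal to one
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p

# toy network: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, node 3 is dangling
A = np.zeros((4, 4))
A[1, 0] = A[2, 0] = A[2, 1] = A[0, 2] = 1.0
p = pagerank(google_matrix(A, alpha=0.85))
print(np.argsort(-p))    # pagerank index: nodes ordered by decreasing probability
```

the cheirank is obtained in exactly the same way after transposing the adjacency matrix, i.e. by feeding the network with inverted link directions to the same routine.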
in the limit the pagerank is determined by the eigenvectors of the highly degenerate eigenvalue .these eigenvectors correspond by definition to invariant subspaces through the matrix .it is known that in general these subspaces correspond to sets of nodes with ingoing links from the rest of the network but no outgoing link to it .these parts of the network have been given different names in the literature ( rank sink , out component , bucket , and so on ) .in this paper , we show that for large matrices of size up to several millions the structure of these invariant subspaces is universal and study in detail the universal behavior of the pagerank at related to the spectrum of , using an optimized arnoldi algorithm .we note that this behavior is linked to the internal structure of the network . indeed , it is possible to randomize real networks by randomly exchanging the links while keeping exactly the same number of ingoing and outgoing links .it was shown in that this process generally destroys the structure of the network and creates a huge gap between the first unit eigenvalue and the second eigenvalue ( with modulus below ) . in this casethe pagerank simply goes for to the unique eigenvector of the matrix associated with the unit eigenvalue .the paper is organized as follows : in section 2 we discuss the spectrum and subspace structure of the google matrix ; in section 3 we present the construction of invariant subspaces , the numerical method of pagerank computation at small damping factors is given in section 4 , the projected power method is described in section 5 , universal properties of pagerank are analyzed in section 6 and discussion of the results is given in section 7 .in order to obtain the invariant subspaces , for each node we determine iteratively the set of nodes that can be reached by a chain of non - zero matrix elements . if this set contains all nodes of the network , we say that the initial node belongs to the _ core space _ . otherwise , the limit set defines a subspace which is invariant with respect to applications of the matrix . in a second stepwe merge all subspaces with common members , and obtain a sequence of disjoint subspaces of dimension invariant by applications of .this scheme , which can be efficiently implemented in a computer program , provides a subdivision of network nodes in core space nodes ( typically 70 - 80% of ) and subspace nodes belonging to at least one of the invariant subspaces inducing the block triangular structure , where the subspace - subspace block is actually composed of many diagonal blocks for each of the invariant subspaces .each of these blocks correspond to a column sum normalized matrix of the same type as and has therefore at least one unit eigenvalue thus explaining the high degeneracy .its eigenvalues and eigenvectors are easily accessible by numerical diagonalization ( for full matrices ) thus allowing to count the number of unit eigenvalues , e.g. 1832 ( 2360 ) for the www networks of cambridge 2006 ( oxford 2006 ) and also to verify that all eigenvectors of the unit eigenvalue are in one of the subspaces .the remaining eigenvalues of can be obtained from the projected core block which is not column sum normalized ( due to non - zero matrix elements in the block ) and has therefore eigenvalues strictly inside the unit circle .we have applied the arnoldi method ( am ) with arnoldi dimension to determine the largest eigenvalues of . 
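in practice this step can be carried out with a standard sparse arnoldi/krylov eigensolver. a hedged sketch using scipy is given below; it assumes that the column-normalized link matrix S is available in scipy sparse format and that core_nodes is the index array produced by the subspace search, so both names are placeholders.

```python
import numpy as np
import scipy.sparse.linalg as spla

def core_space_spectrum(S, core_nodes, k=200):
    """largest-modulus eigenvalues of the core-space block S_cc, obtained with
    the implicitly restarted arnoldi method (ARPACK via scipy.sparse.linalg.eigs)."""
    S_cc = S.tocsr()[core_nodes, :][:, core_nodes]
    # k must stay well below the block dimension; which='LM' requests the
    # eigenvalues of largest modulus, i.e. those closest to the unit circle
    return spla.eigs(S_cc, k=k, which='LM', return_eigenvectors=False)

# lam = core_space_spectrum(S, core_nodes, k=200)
# gap = 1.0 - np.max(np.abs(lam))   # distance of the core spectrum from unity
```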
for both example networksthis provides at least about 4000 numerical accurate eigenvalues in the range .for the two networks the largest core space eigenvalues are given by ( 0.999982435081 ) with a quite clear gap ( ) .we also mention that the largest subspace eigenvalues with modulus below 1 also have a comparable gap . in order to obtain this accuracyit is highly important to apply the am to and not to the full matrix ( see more details below ) . in the latter casethe am fails to determine the degeneracy of the unit eigenvalue and for the same value of it produces less accurate results .( blue dots or crosses ) and core space eigenvalues ( red dots ) in ( green curve shows unit circle ) ; here ( 30579 ) , there are 1543 ( 1889 ) invariant subspaces , with maximal dimension 4656 ( 1545 ) and the sum of all subspace dimensions is ( 30579 ) .the core space eigenvalues are obtained from the arnoldi method applied to the block with arnoldi dimension 20000 and are numerically accurate for ._ middle row : _ eigenvalue spectrum for the matrix , corresponding to the cheirank , for cambridge 2006 ( left panel ) and oxford 2006 ( right panel ) with red dots for core space eigenvalues ( obtained by the arnoldi method applied to with ) , blue crosses for subspace eigenvalues and the green curve showing the unit circle . _ bottom row : _fraction of eigenvalues with for the core space eigenvalues ( red bottom curve ) and all eigenvalues ( blue top curve ) from top row data .the number of eigenvalues with is 3508 ( 3275 ) of which 1832 ( 2360 ) are at ; it larger than the number of invariant subspaces which have each at least one unit eigenvalue.,scaledwidth=70.0% ] in fig .[ fig2 ] we present the spectra of subspace and core space eigenvalues in the complex plane as well as the fraction of eigenvalues with modulus larger than , showing that subspace eigenvalues are spread around the unit circle being closer to than core eigenvalues .the fraction of states with has a sharp jump at , corresponding to the contribution of , followed by an approximate linear growth .we now turn to the implications of this structure to the pagerank vector ; it can be formally expressed as let us first assume that is diagonalizable ( with no non - trivial jordan blocks ) .we denote by its ( right ) eigenvectors and expand the vector in this eigenvector basis with coefficients . inserting this expansion in eq .( [ pagerank1 ] ) , we obtain in the case of non - trivial jordan blocks we may have in the second sum contributions with some integer smaller or equal to the size of the jordan block .suppose we have for example a jordan block of dimension 2 with a principal vector such that with the corresponding eigenvector . from thiswe obtain for arbitrary integer the following condition on the 1-norm of these vectors : showing that one should have if . even if this condition is hard to fulfill for all if is close to 1 . in generalthe largest eigenvalues with modulus below 1 are not likely to belong to a non - trivial jordan block ; this is indeed well verified for our university networks since the largest core space eigenvalues are not degenerate . here( [ pagerank2 ] ) indicates that in the limit the pagerank converges to a particular linear combination of the eigenvectors with , which are all localized in one of the subspaces . for a finite value of scale of this convergence is set by the condition ( ) and the corrections for the contributions of the core space nodes are . 
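computing the pagerank in this nearly degenerate regime is numerically delicate, since the plain power method converges extremely slowly. one remedy, of the kind used in the computations reported next, is to alternate a few power iterations with a small arnoldi step that filters out the slowly decaying core-space components. the sketch below is only a schematic version of such an alternation; the number of power steps, the krylov dimension and the tolerance are illustrative choices, and G must be large enough for ARPACK.

```python
import numpy as np
import scipy.sparse.linalg as spla

def pagerank_near_one(G, n_power=10, ncv=8, tol=1e-12, max_rounds=500):
    """alternate power iterations with a small arnoldi (krylov) refinement step."""
    N = G.shape[0]
    p = np.full(N, 1.0 / N)
    for _ in range(max_rounds):
        for _ in range(n_power):
            p = G @ p
            p /= p.sum()
        # one small arnoldi step seeded with the current iterate
        _, vecs = spla.eigs(G, k=1, v0=p, ncv=ncv, which='LM')
        q = np.abs(vecs[:, 0])     # the perron vector can be chosen positive
        q /= q.sum()
        if np.abs(q - p).sum() < tol:
            return q
        p = q
    return p
```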
in order to test this behaviorwe have numerically computed the pagerank vector for values . for , the usual power method (iterating the matrix on an initial vector ) is very slow and in many cases fails to converge with a reasonable precision . in order to get the pagerank vector in this regime, we use a combination of power and arnoldi methods that allowed us to reach the precision : after each iterations with the power method we use the resulting vector as initial vector for an arnoldi diagonalization choosing an arnoldi matrix size ; the resulting eigenvector for the largest eigenvalue is used as a new vector to which we apply the power method and so on until convergence by the condition is reached . for the university network data of in most casesthe values and ( for cambridge 2006 ) provide convergence with about iterations of the process ( for ) .additional details are given below .in order to construct the invariant subspaces we use the following scheme which we implemented in an efficient computer program . for each node we determine iteratively a sequence of sets , with and containing the nodes which can be reached by a non - zero matrix element from one of the nodes .depending on the initial node there are two possibilities : a ) increases with the iterations until it contains all nodes of the network , especially if one set contains a dangling node connected ( by construction of ) to all other nodes , or b ) saturates at a limit set of small or modest size . in the first case , we say that the node belongs to the _ core space _ . in the second casethe limit set defines a subspace of dimension which is invariant with respect to applications of the matrix .we call the initial node the _ root node _ of this subspace ; the members of do not need to be tested themselves as initial nodes subsequently since they are already identified as _ subspace nodes_. if during the iterations a former root node appears as a member in a new subspace one can absorb its subspace in the new one and this node loses its status as root node .furthermore , the scheme is greatly simplified if during the iterations a dangling node or another node already being identified as core space node is reached . in this caseone can immediately attribute the initial node to the core space as well . for practical reasons it may be useful to stop the iterationif the set contains a macroscopic number of nodes larger than where is some constant of order one and to attribute in this case the node to the core space .this does not change the results provided that is above the maximal subspace dimensions . for the university networks we studied ,the choice turned out to be sufficient since there is always a considerable number of dangling nodes . 
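the scheme just described translates almost directly into code. the sketch below classifies nodes into core-space and subspace nodes by the iterative reachable-set construction, with early exits at dangling nodes, at nodes already identified as core nodes, and at the size cutoff; the merging of overlapping subspaces and the removal of zero nodes (the later steps of the scheme) are only indicated by a comment. the data layout (a list of out-neighbours per node, a set of dangling nodes) and the cutoff value are assumptions.

```python
from collections import deque
import numpy as np

def classify_nodes(out_links, dangling, N, n_cut=None):
    """status[i] is 'core' or 'subspace'; subspaces maps each root node to the
    frozen reachable set defining its invariant subspace."""
    n_cut = n_cut or max(1, N // 10)            # cutoff of order 0.1*N (illustrative)
    status = np.array(['unknown'] * N, dtype=object)
    subspaces = {}
    for start in range(N):
        if status[start] != 'unknown':
            continue                             # already attributed elsewhere
        seen, frontier, is_core = {start}, deque([start]), False
        while frontier:
            i = frontier.popleft()
            if i in dangling or status[i] == 'core' or len(seen) > n_cut:
                is_core = True                   # effectively reaches the whole network
                break
            for j in out_links[i]:
                if j not in seen:
                    seen.add(j)
                    frontier.append(j)
        if is_core:
            status[start] = 'core'
        else:
            subspaces[start] = frozenset(seen)
            for i in seen:
                status[i] = 'subspace'
    # a second pass would merge subspaces sharing members, and a third pass
    # would strip the 'zero nodes' before diagonalizing each subspace block
    return status, subspaces
```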
in this way, we obtain a subdivision of the nodes of the network in core space nodes ( typically 70 - 80% of ) and subspace nodes belonging to at least one of the invariant subspaces .however , at this point it is still possible , even likely , that two subspaces have common members .therefore in a second step we merge all subspace with common members and choose arbitrarily one of the root nodes as the `` root node '' of the new bigger subspace which is of course also invariant with respect to .we can also mention that most of the subspaces contain one or more `` zero nodes '' ( of first order ) with outgoing links to the subspace but no incoming links from the same or other subspaces ( but they may have incoming links from core space nodes as every subspace node ) .these nodes correspond to complete zero lines in the corresponding diagonal block for this subspace in the matrix and therefore they produce a trivial eigenvalue zero . furthermore , there are also zero nodes of higher order ( ) which have incoming subspace links only from other zero nodes of order resulting in a non - trivial jordan block structure with eigenvalue zero . in other words ,when one applies the matrix to a vector with non - zero elements on all nodes of one subspace one eliminates successively the zero nodes of order and finally the resulting vector will have non - zero values only for the other `` non - zero nodes '' . due to this any subspace eigenvector of with an eigenvalue different from zero ( and in particular the pagerank vector ) can not have any contribution from a zero node . in a third step of our schemewe therefore determined the zero nodes ( of all orders ) and the reduced subspaces without these zero nodes .the results for the distribution of subspace dimensions is discussed in section 6 ( see the left panel of fig . [ fig7 ] ) .the distribution is essentially unchanged if we use the reduced subspaces since the number of zero nodes is below of for most of universities . only for the matrix ofwikipedia we have about of zero nodes that reduces the value of from 21198 to 11625 .once the invariant subspaces of are known it is quite obvious to obtain numerically the exact eigenvalues of the subspaces , including the exact degeneracies .thus , using the arnoldi method we determine the largest remaining eigenvalues of the core projected block . in fig .[ fig2 ] the complex spectra of subspace and core space eigenvalues of and are shown for the two networks of cambridge 2006 and oxford 2006 as well as the fraction of eigenvalues with modulus larger than indicating a macroscopic fraction of about 2% of eigenvalues with . in table 1 , we summarize the main quantities of networks studied : network size , number of network links , number of subspace nodes and average subspace dimension for the university networks considered in fig . [ fig4 ] and the matrix of wikipedia ..[table1 ] network parameters [ cols="<,^,^,^,^",options="header " , ] in fig .[ fig4 ] we compare these gap values to the other university networks for which we found by the arnoldi method larger gaps . 
versus its rank index for the university networks with a small core space gap ., scaledwidth=70.0% ] in fig .[ fig5 ] we show the eigenvectors obtained by the projected power method versus their rank index defined by the ordering of the components of theses vectors .we can clearly identify the exponential localization on 40 nodes for leeds 2006 or 110 nodes for cambridge 2002 , 2003 and 2005 with values below ( leeds 2006 ) or ( cambridge 2002 , 2003 and 2005 ) .the case cambridge 2004 with a quite larger gap provides at first the same exponential localization as the other three cases of cambridge but after 50 nodes it goes over to a tail in the range to . in all casesthe range of values of the small tail is in qualitative agreement with the gap values in the table 2 and the expression ( [ good_lambda_gap ] ) .when the iteration with the matrix starts at the maximal node the vector diffuses first quite slowly inside the localization domain for a considerable number of iterations ( 46 for leeds 2006 and 35 for cambridge 2002 , 2003 and 2005 ) until it reaches a dangling node at which point the diffusion immediately extends to the full network since the dangling node is artificially connected to all nodes .however , at this point the probability of the amplitude is already extremely small .therefore the initial node belongs technically to the core space ( since it is `` connected '' to all other nodes ) but practically it defines a quasi subspace ( since the probability to leave the localization domain is very small or ) . at , which is much larger than the gap, this quasi subspace also contributes to the pagerank in the same way as the exact invariant subspaces .this provides somehow a slight increase of the effective value of but it does not change the overall picture as described in section 2 . fig .[ fig5 ] also shows that apparently the particular network structure responsible for this quasi subspace behavior is identical for the three cases cambridge 2002 , 2003 and 2005 . for cambridge 2004this structure also exists but here there is one additional dangling node which is reached at an earlier point of the initial slow diffusion providing delocalization on a scale . for the case of cambridge 2006 with a `` large '' gap this structure seems to be completely destroyed but this may be due to one single modified matrix element if compared to the networks of the previous years .using the powerful numerical methods described above we turn to the analysis of universal properties of pagerank .[ fig6 ] clearly confirms the theoretical picture given in section 2 of the limit behavior for the pagerank at . in particular onecan clearly identify the limit where it is localized in the invariant subspaces with only small corrections at the core space nodes .we also determine the eigenvector of the largest core space eigenvalue of the projected matrix . in the lower panels of fig .[ fig6 ] , we compare the pagerank at with this vector ( normalized by the 1-norm ) multiplied by . we observe that except for a very small number of particular nodes this vector approximates quite well the core space correction of the pagerank even though the corrections due to the second term in ( [ pagerank2 ] ) are more complicated with contributions from many eigenvectors . in the inserts , we also show the fidelity of the pagerank , which decays from 1 at to about 0.188 ( 0.097 ) at , and the residual weight of the core space in the pagerank which behaves as [ for . for .numerical precision is such that . 
_bottom row : _ at .blue crosses correspond to the eigenvector of the largest core space eigenvalue ( 0.999982435081 ) multiplied by .the arrow indicates the first position where a site of the core space contributes to the rank index ; all sites at its left are in an invariant subspace .insert shows the residual weight with of the core space in the pagerank and the difference versus where is the pagerank fidelity with respect to , i.e. .note that since the pagerank is normalized through the 1-norm : .the limiting value ( 0.097481331613 ) is obtained from linear extrapolation from the data with smallest values of which we verified to be exact up to machine precison ., scaledwidth=70.0% ] as mentioned in the previous section , we also determine the subspace structure and the pagerank at for other university networks available at and for the matrix of wikipedia with and ( it turns out that the matrix for wikipedia provides only very few small size subspaces with no reliable statistics ) .a striking feature is that the distribution of subspace dimensions is universal for all networks considered ( fig .[ fig7 ] left panel ) .the fraction of subspaces with dimensions larger than is well described by the power law with the dimensionless variable , where is the average subspace dimension .the fit of all cases gives .it is interesting to note that the value of is close to the exponent of poincar recurrences in dynamical systems .possible links with the percolation on directed networks ( see e.g. ) are still to be elucidated .the rescaled pagerank ( or cheirank for the case of wikipedia ) takes a universal form with a power law for with an exponent and close to zero for ( see right panel of fig .[ fig7 ] ) . with dimensions larger than as a function of the rescaled variable .upper curves correspond to cambridge ( green ) and oxford ( blue ) for years 2002 to 2006 and middle curves ( shifted down by a factor of 10 ) to the university networks of glasgow , cambridge , oxford , edinburgh , ucl , manchester , leeds , bristol and birkbeck for year 2006 with between 14 and 31 .lower curve ( shifted down by a factor of 100 ) corresponds to the matrix of wikipedia with .the thick black line is ._ right panel : _ rescaled pagerank versus rescaled rank index for and for the same university networks as in the left panel ( upper and middle curves , the latter shifted down and left by a factor of 10 ) .the lower curve ( shifted down and left by a factor of 100 ) shows the rescaled cheirank of wikipedia versus with .the thick black line corresponds to a power law with exponent ., scaledwidth=70.0% ] for certain university networks , cambridge 2002 , 2003 and 2005 and leeds 2006 , there is a specific complication .indeed , the am ( with ) provides a maximal core space eigenvalue _ numerically _ equal to 1 , which should not be possible .a more careful evaluation by a different algorithm , based on the power method ( iterating with a subsequent core space projection ) and measuring the loss of probability at each iteration , shows that this eigenvalue is indeed very close but still _ smaller _ than 1 .for the three cases of cambridge we find and for leeds 2006 : ( see details in section 5 ) .the corresponding eigenvectors are exponentially localized on a small number of nodes ( about 110 nodes for cambridge and 40 nodes for leeds 2006 ) being very small ( for cambridge and for leeds 2006 ) on other nodes .these quasi - subspaces with small number of nodes belong _ technically _ to the core space , since they are 
eventually linked to a dangling node , but when starting from the maximal node of these eigenvectors it takes a considerable number of iterations with a strong reduction of probability to reach the dangling node . since their eigenvalue is very close to 1 , these quasi - subspaces also contribute to the pagerank at in the same way as the exact invariant subspaces .however , since the size of these quasi - subspaces is small they do not change the overall picture and we can still identify a region of large pagerank with subspace or quasi - subspace nodes and vanishing pagerank for the other core space nodes . for most of the other universities and also the matrix ofwikipedia we have ( and for cambridge 2004 ) .our results show that for the pagerank vector converges to a universal distribution determined by the invariant subspaces ( with ) .the fraction of nodes which belong to these subspaces varies greatly depending on the network , but the distribution of the subspace sizes is described by a universal function that reminds the properties of critical percolation clusters .when decreases from , the pagerank undergoes a transition which allows to properly rank all nodes .this process is controlled by the largest eigenvalues of the core matrix , which are strictly below but can be extremely close to it .their distance from sets the scale of the transition , and the associated eigenvectors of control the new ranking of nodes .although at the eigenspace for eigenvalue can be very large , for sufficiently larger in norm than the eigenvalues of , the pagerank remains fixed when , in a way reminiscent of degenerate perturbation theory in quantum mechanics .our highly accurate numerical method based on alternations of arnoldi iterations and direct iterations of matrix enables to determine the correct pagerank even where the scale of this transition is extremely small ( ) and the matrix size is very large ( up to several millions ) .the very slow convergence of the power method in this regime is reminiscent of very long equilibration times in certain physical systems ( e.g. spin glasses ) , and thus arnoldi iterations can be viewed as a certain kind of simulated annealing process which enables to select the correct eigenvector among many others with very close eigenvalues .the pagerank in this regime of shows universal properties being different from the usual pagerank at , with a different statistical distribution .this can be used to refine search and ranking in complex networks and hidden communities extraction .finally we note that usually in quantum physics one deals with unitary matrices with a real spectrum . in the case of directed markov chainswe naturally obtain a complex spectrum . in physical quantum systemsa complex spectrum appears in positive quantum maps , problems of decoherence and quantum measurements and random matrix theory of quantum chaotic scattering .thus we hope that a cross - fertilization between complex matrices and directed network will highlight in a new way the properties of complex networks .we thank calmip for supercomputer access and a.d.chepelianskii for help in data collection from .99 brin s. and page l. 1998 _ computer networks and isdn systems _ * 30 * , 107 . langville a m and meyer c d 2006 _ google s pagerank and beyond : the science of search engine rankings _( princeton : princeton university press ) .redner s. 2005 _ phys . today_ * 58 * , 49 .radicchi f. , fortunato s. , markines b. , and vespignani a. 2009 _ phys .e _ * 80 * , 056103 .west j.d . 
,bergstrom t.c . , andbergstrom c. t. 2010 _ coll ._ * 71 * , 236 ; ` http://www.eigenfactor.org/ ` radicchi f. 2011 _ plos one _ * 6 * , e17249 .avrachenkov k. , donato d. and litvak n. ( eds . ) 2009 _ algorithms and models for the web - graph : proc .of 6th international workshop , waw 2009 barcelona _ _ lect .notes comp .sci . _ * 5427 * ( 2009 ) .brin m. and stuck g. 2002 _ introduction to dynamical systems _ , ( cambridge : cambridge univ .donato d. , laura l. , leonardi s. and millozzi s. 2005 _ eur .j. b _ * 38 * , 239 ; pandurangan g. , raghavan p. and upfal e. 2005 _ internet math ._ * 3 * , 1 .litvak n. , scheinhardt w. r. w. and volkovich y. 2008 _ lect .notes comp .sci . _ * 4936 * , 72 .chepelianskii a. d. 2010 _ towards physical laws for software architecture _ , arxiv:1003.5455[cs.se ] .zhirov a. o. , zhirov o. v. and shepelyansky d. l. 2010 _ eur .j. b _ * 77 * , 523 .` academic web link database project ` ` http://cybermetrics.wlv.ac.uk/database/ ` serra - capizzano s. 2005 _ siam j. matrix anal .* 27 * , 305 .avrachenkov k. , litvak n. and pham k. s. 2007 _ lect .notes comp ._ * 4863 * , 16 ; boldi p. , santini m. and vigna s. 2009 _ acm trans . on inf_ * 27 * , 19 .giraud o. , georgeot b. and shepelyansky d. l. 2009 _ phys .e _ * 80 * , 026107 .stewart g. w. 2001 _ matrix algorithms volume ii : eigensystems _ , ( siam ) .golub g. h. and greif c. 2006 _ bit num . math ._ * 46 * , 759 .frahm k. m. and shepelyansky d. l. 2010 _ eur .j. b _ * 76 * , 57 . in certain invariant subspaces , there are nodes with no ingoing links from the same subspace , which do not contribute to the pagerank for . except for wikipedia ( cheirank ) , they are very few in our data and their effect is not visible in the figures .schwartz n. , cohen r. , ben - avraham d. , barabasi a .-havlin s. 2002 _ phys .e _ * 66 * , 015104(r ) .bruzda w. , cappellini v. , sommers h .- j ., zyczkowski k. 2009 _ phys .a _ * 373 * , 320 .bruzda w. , smaczynski , m. , cappellini v. , sommers h .- j . and zyczkowski k. 2010 _ phys . rev .e _ * 81 * , 066209 .guhr t. , mller - groeling a. and weidenmller h.a .1998 _ phys .rep . _ * 299 * , 189 .
the pagerank algorithm ranks the nodes of a network through a specific eigenvector of the google matrix, using a damping parameter α ∈ ]0,1[. using extensive numerical simulations of large web networks, with a special accent on british university networks, we determine numerically and analytically the universal features of the pagerank vector at its emergence when α → 1. the whole network can be divided into a core part and a group of invariant subspaces. for α → 1 the pagerank converges to a universal power-law distribution on the invariant subspaces, whose size distribution also follows a universal power law. the convergence of the pagerank at α → 1 is controlled by eigenvalues of the core part of the google matrix, which are extremely close to unity and lead to large relaxation times, as found for example in spin glasses.
one of the most puzzling features of quantum mechanics is the violation of so - called bell - type inequalities representing a cornerstone of our present understanding of quantum probability theory . as pointed out by john bell such a violation , as predicted by quantum mechanics , requires a radical reconsideration of basic physical principles like the assumption of local realism .however , bell - type inequalities have already a long tradition dating back to george boole s work on `` conditions of possible experience '' , dealing with the question of necessary and sufficient conditions on probabilities of logically interconnected events .take for example the statements : `` the probability of rain in vxj is about '' and `` the probability of rain in vienna is '' .nobody would believe that the joint probability of rain in both places could be just the claim that the joint probability is very much lower than the single probabilities is apparently counterintuitive .the question remains : which numbers could be considered reasonable and consistent ?boole s requirements on the ( joint ) probabilities are expressed by certain equations or inequalities relating those ( joint ) probabilities . since bell s investigations into bounds on classical probabilities and their relation to quantum mechanical predictions , similar inequalities for particular physical setups have been discussed in great number and detail ( see for example refs .furthermore , violations of bell - type inequalities , as predicted by quantum mechanics , have been experimentally verified in different areas of physics to a very good degree of accuracy . however , whereas these bounds are interesting for an inspection of the violations of classical probabilities by quantum probabilities , the issue of the validity of quantum probabilities and their experimental verification is completely different .recently , bovino _ et al . _ conducted an experiment based on numerical studies by the current authors and triggered by a proposal of cabello to verify bounds on quantum probabilities depending on a particular choice of measurements . in what followswe shall present analytical as well as numerical studies on such quantum bounds allowing for further experimental tests of different kinds of bell - type inequalities .at first we shall start from a geometrical derivation of bounds on classical probabilities given by linear inequalities in terms of correlation polytopes .considering an arbitrary number of classical events one can assign to each event a certain probability and probabilities for the joint events .these probability values can be collected to form the the components of a vector , where each can take values in the interval ] when measuring their spin / polarization in coincidence along the directions and , respectively .the global limit for a quantum violation of this inequality is ; quantum theory does not allow a higher value , no matter which state and which measurement directions are chosen .however , in principle , the four terms on the left hand side of eq . ( [ chsh ] ) could be set such that a value of can be obtained by appropriate choices of for the correlation functions .popescu and rohrlich investigated the case where `` physical locality '' is assumed without referring to a specific physical model ( such as quantum mechanics ) , whether realistic or not . 
in this context , `` physical locality '' means that the marginal probabilities for measuring an observable on one side should be independent of the observable measured on the other side , which is a natural assumption for a lorentz invariant theory .the maximal value of the left hand side of eq .( [ chsh ] ) has been shown to be as well , which is beyond the quantum bound and we can conclude that quantum mechanics does not exploit the whole range of violations possible in a theory conforming to relativistic causality . still , in our opinion, the nagging question remains why quantum mechanics does not violate the inequality to a higher degree . in what follows, we will restrict our attention to the simpler task to explore the quantum bounds on violations of bell - type inqualities for particular given measurement directions and arbitrary states .it turns out that the equations for the analytic description of the quantum bounds can be derived by solving an eigenvalue problem .intuitively it can not be expected that it is feasible to achieve a maximal violation of some inequality for any set of measurements just by choosing a single appropriate state .the quantum mechanical description of the physical scenario discussed above involves spin measurements represented by projection operators with , denoting the direction of measurement in the plane , and standing for the two - dimensional identity matrix . for an even more general description we would have to take all possible two - dimensional projection operators into account , corresponding to measurements in arbitrary directions .as this generalization is straightforward and does not lead to any more insight , we will work with this restricted set of measurements parameterized in eq .( [ eq : proj ] ) . acts on one of the two particles .this implies that we have to choose a tensor product of two hilbert spaces to represent the state vectors corresponding to possible state configurations ; i. e. , .the representation of a single - particle measurement in is then for a measurement on the particle emitted in the negative -direction ( ) , or in the positive -direction ( ) , respectively .two - particle measurements are implemented by applying on both and ; i. e. , corresponding to a measurement of the joint probabilities .this setup can easily be enlarged to systems comprising more than two particles by the tensor product of the appropriate hilbert spaces , but for the sake of simplicity we will restrict ourselves to bipartite systems . the general method for obtaining the quantum violations of bell - type inequalitiesis then to replace the classical probabilities by projection operators in eqs .( [ singleparticlemeasurement],[twoparticlemeasurement ] ) in a certain bell - type inequality to obtain the _ bell - operator _ , which is a sum of projection operators . in the case of the ch inequality one obtains in a second step one calculates the quantum mechanical expectation values by ,\ ] ] where is a positive definite , hermitian and normalized density operator denoting the state of the system .for some and set of angles one obtains a violation of a classical inequality . in general the bell - operatorscan be written in the form with real valued coefficients . here is the number of particles involved and the are either projection operators denoting a measurement on particle or the identity when no measurement is performed on the -th particle . 
since and for arbitrary selfadjoint operators , the bell - operator is also self - adjoint with real eigenvalues .however , the eigenvalues of can not be deduced from the eigenvalues of the constituents in the sum in eq .( [ eq : belloperator ] ) since these are not commuting in general and therefore are not diagonalizable simultaneously .one can make use of the _ min - max principle _ , stating that the bound of a self - adjoint operator is equal to the maximum of the absolute values of its eigenvalues .thus , the problem of finding the maximal violation possible for a particular choice of measurements can be solved via an eigenvalue problem .the maximal eigenvalue corresponds to the maximal violation and the associated eigenstates are the multi - partite states which yield a maximum violation of the classical bounds under the given experimental ( parameter ) setup . for a demonstration of the method let us start with the trivial setup of two particles measured along a single ( but not necessarily identical ) direction on either side .the vertices are for and thus , , , ; the corresponding face ( bell - type ) inequalities of the polytope spanned by the four vertices are given by , , and .the classical probabilities have to be substituted by the quantum ones ; i.e. , \otimes 1_2 , \\ p_2 & \rightarrow & q_2 ( \theta ) = 1_2 \otimes{ \frac{1}{2}}\left[1_2 + { \bf \sigma } ( \theta ) \right ] , \\ p_{12}&\rightarrow & q_{12 } ( \theta , \theta ' ) = { \frac{1}{2}}\left[1_2 + { \bf \sigma } ( \theta ) \right ] \otimes { \frac{1}{2}}\left[1_2 + { \bf \sigma } ( \theta ' ) \right ] . \end{array } \label{2004-qbounds - e2}\ ] ]it follows that the self - adjoint transformation corresponding to the classical bell - type inequality ( ) is given by the eigenvalues of are and , irrespective of , the maximal value of predicted by the min - max principle does not exceed the classical bound 1 .now we shall enumerate analytical quantum bounds for the more interesting cases comprising two or three distinct measurement directions on either side yielding the quantum equivalents of the clauser - horne ( ch ) inequality , as well as of the inequalities discussed in . for two measurement directions per side ,we obtain the operator based on the ch - inequality [ eq .( [ eq : chineq ] ) ] upon substitution of the classical probabilities by projection operators : the eigenvalues of the self - adjoint transformation in ( [ 2004-qbounds - e4 ] ) are yielding the maximal bound .the eigenstates corresponding to maximal violating eigenstates are maximally entangled for general measurement angles lying in the -plane .the numerical simulation of the bounds of the ch - inequality is based on the generation of arbitrary bipartite density matrices ; i. e. , hermitian positive matrices with trace equal to one . since one can write a hermitian positive matrix as the square of a self - adjoint matrix , .the normalized matrix } ] .in addition , the well - known maximal violation for the singlet - state at and is drawn . the extension to _ three _ measurement operators for each particle merely yields one additional non - equivalent inequality ( with respect to symmetries ) among the 684 inequalities representing the faces of the associated classical correlation polytope . 
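both steps described above — building the bell–ch operator for fixed measurement angles and probing it with randomly generated density matrices — fit in a few lines of numpy. the projectors are parameterized by directions in a single plane, as stated earlier; which plane is chosen, the scanned angle grid, the number of random states and the random-matrix recipe are illustrative assumptions.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def proj(theta):
    """spin projector along a direction in a fixed plane (assumed x-z plane)."""
    return 0.5 * (I2 + np.cos(theta) * SZ + np.sin(theta) * SX)

def ch_operator(a1, a2, b1, b2):
    """operator of the clauser-horne expression
    q12(a1,b1) + q12(a1,b2) + q12(a2,b1) - q12(a2,b2) - q1(a1) - q2(b1)."""
    q12 = lambda a, b: np.kron(proj(a), proj(b))
    return (q12(a1, b1) + q12(a1, b2) + q12(a2, b1) - q12(a2, b2)
            - np.kron(proj(a1), I2) - np.kron(I2, proj(b1)))

# maximal eigenvalue over a coarse angle grid: reaches the global quantum
# (tsirelson-type) bound (sqrt(2)-1)/2 ~ 0.2071, whereas the classical bound is 0
grid = np.arange(0.0, 2.0 * np.pi, np.pi / 4.0)
best = max(np.linalg.eigvalsh(ch_operator(*angles)).max()
           for angles in product(grid, repeat=4))
print(best)

def random_density_matrix(dim, rng):
    """rho = B^2 / tr(B^2) with B a random hermitian matrix, as in the text."""
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    B = 0.5 * (M + M.conj().T)
    rho = B @ B
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
O = ch_operator(0.0, np.pi / 2, np.pi / 4, 7 * np.pi / 4)
samples = [np.trace(random_density_matrix(4, rng) @ O).real for _ in range(10_000)]
print(max(samples))   # stays below the eigenvalue bound for these angles
```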
the associated operator for symmetric measurement directions is given by in the bell basis with and .in this basis , can be decomposed into a direct sum of a one - dimensional and a three - dimensional matrix , thus simplifying the calculations of the real eigenvalues . by using the cardano method ,these can be calculated to be -\frac{b}{3}. \label{2004-qbounds - o33ev}\end{aligned}\ ] ] here , and where .( for convenience we have omitted the dependencies on . ) in figure [ fig:2004-qbounds - f1 ] , the eigenvalues are plotted as functions of the parameter . in dependence of the relative angle .,width=340 ] the maximum violation of is obtained for with the eigenvector corresponding to is maximally entangled , but in contrast to the ch - inequality , this is in general not the case for eigenstates corresponding to the maximal eigenvalue at .the analytical quantum bound of the ch - operator has been enumerated by cabello as well as by the current authors and experimentally verified by bovino _. _ using polarization - entangled photon pairs .the ansatz of cabello for the experimental realization made use of the fact that the eigenstates leading to maximal violations are maximally entangled .thus when applying a unitary transformation of the form onto an initial state , one obtains all maximally violating states for different values .however , in the case of , this scheme has to be extended , since the maximal violating states are not maximally entangled in general .such states can not be created from maximal entangled initial states by a local unitary operation , since such a factorized transformation does not change the degree of entanglement . to obtain states constituting the quantum bounds ,one has to apply unitary transformations to the initial state comprising also non - local operations which can not be written as a tensor product of two unitary single - particle operators .a simplification for an experimental verification of the quantum bounds of bell - type inequalities is due to the fact that maximal violating states are pure .therefore , it is sufficient to generate initial states with variable degree of entanglement . utilizing the schmidt - decomposition , which is always possible for a bipartite state , one can write any pure state in the form latexmath:[$|{\psi } \rangle= \sum_k \lambda_k are orthonormal basis states for particle and , respectively , and .the weights of the s are a measure of the degree of entanglement comprising the special cases where for a maximally entangled state and ( or vice versa ) for a separable state . having a source producing such states in a particular basis one can obtain all other pure states by applying a local unitary operation .appropriate photon sources have been suggested for example by white __ and barbieri _ et al ._ and could therefore be used to trace the bounds on arbitrary bipartite bell - type inequalities in the same manner as in the experiment of bovino _ et al .in conclusion we have shown how to obtain analytically the quantum bounds on bell - type inequalities for a particular choice of measurement operators .we have also presented a numerical simulation for obtaining these bounds for the ch - inequality .we have provided a quantitative analysis and derived the exact quantum bounds for bipartite inequalities involving two or three measurements per site .the generalization to an arbitrary number of measurement parameters is straightforward as the dimensionality of the eigenvalue problem remains constant . 
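the schmidt coefficients mentioned above, which quantify the degree of entanglement of the candidate states, can be read off from a singular value decomposition of the reshaped amplitude vector. the short sketch below reuses the ch_operator from the previous snippet and is only meant to illustrate the bookkeeping.

```python
def schmidt_coefficients(psi, d1=2, d2=2):
    """schmidt coefficients of a bipartite pure state (singular values of the
    amplitude matrix); for a normalized state their squares sum to one."""
    return np.linalg.svd(np.asarray(psi).reshape(d1, d2), compute_uv=False)

# eigenvector of the largest eigenvalue of the ch operator at optimal angles
w, V = np.linalg.eigh(ch_operator(0.0, np.pi / 2, np.pi / 4, 7 * np.pi / 4))
print(schmidt_coefficients(V[:, -1]))   # ~ [0.707, 0.707]: maximally entangled
```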
for more than two particles the dimension of the matrix associated with a bell - type operator increases exponentially .however , one may conjecture that such matrices can be decomposed into a direct sum of lower dimensional matrices . in the context of this conferencewe also believe that the analytic expressions of the quantum bounds could serve as consistency criteria of mathematical models proposed to show that a violation of bell - type inequalities does not necessarily imply the absence of a possible local - realistic theory from the logical point of view .it is claimed that violations can be achieved without abandoning a local and realistic position assuming for example time - dependencies of the random parameters , or `` chameleon '' effects .still , any appropriate model has to be in accordance with quantum mechanics not only qualitatively , but also quantitatively , and hence should reproduce also the `` fine structure '' of the quantum bounds as discussed above .finally , although there is no theoretical evidence for a stronger - than - quantum violation whatsoever , its mere possibility justifies the sampling of the fine structure of the quantum bounds from the experimental as well as the theoretical point of view in order to understand and verify the restriction imposed by quantum theory .s. f. acknowledges the support of the austrian science foundation , project nr .1514 , and the support of prof .rauch rendering investigations in such theoretical aspects of quantum mechanics possible .
bell-type inequalities and violations thereof reveal the fundamental differences between standard probability theory and its quantum counterpart. previous investigations have established ultimate bounds on quantum mechanical violations; for example, tsirelson's bound constitutes a global upper limit for quantum violations of the clauser-horne-shimony-holt (chsh) and the clauser-horne (ch) inequalities. here we investigate a method for calculating the precise quantum bounds on arbitrary bell-type inequalities by solving the eigenvalue problem for the operator associated with each inequality. to this end, we use the min-max principle to calculate the norm of these self-adjoint operators from the maximal eigenvalue, which yields the upper bound for a particular set of measurement parameters. the eigenvectors corresponding to the maximal eigenvalues provide the quantum states for which a bell-type inequality is maximally violated.
nowadays various estimates of lyapunov dimension of attractors of generalized lorenz systems are actively developed .one particular case of system is the yang system : where are positive , is arbitrary real number . in an estimation of lyapunov dimension for the generalized lorenz systemwas obtained ( see also remarks in ) , which for the case ( i.e. for the yang system ) is as follows : [ errortheorem ] let be an invariant compact set of the yang system .assume , , , , and if one of the following conditions holds 1 . , 2 . , , then . in our workwe extend the domain of parameters , where the above estimation is valid .consider an autonomous differential equation where is smooth .suppose that solution exist for all and consider corresponding linearized system along the solution : where \ ] ] is the jacobian matrix . by a linear variable change with a nonsingular -matrix systemis transformed into consider the linearization along the corresponding solution : here the jacobian matrix is as follows suppose that are eigenvalues of the matrix [ theorem : th1 ] given an integer ] , suppose that there are a continuously differentiable scalar function and a nonsingular matrix such that then .here is the derivative of with respect to the vector field : the introduction of the matrix can be regarded as a change of the space metric .[ theorem : th2 ] assume that there are a continuously differentiable scalar function and a nonsingular matrix such that then any solution of system bounded on tends to an equilibrium as .thus , if holds , then the global attractor of system coincides with its stationary set .we can prove the following result for the yang system .[ main ] 1 .assume and the following inequalities , are satisfied .then any bounded on solution of system tends to a certain equilibrium as .2 . assume and .then any bounded on solution of system tends to a certain equilibrium as .3 . assume and there are two distinct real roots of equation such that .+ in this case 1 .if then any bounded on solution of system tends to a certain equilibrium as .2 . if then where is bounded invariant set of system . for proving theorem [ main ] we use theorems [ theorem : th1 ] and [ theorem : th2 ] with the matrix of the form and the function ^{\frac{1}{2}}},\ ] ] where the parameters are variable .exact value of the lyapunov dimension is obtained by the comparison of estimate and the lyapunov dimension at the zero equilibrium point .we consider classical chaotic parameters of the yang system : and . in the first case inequalityis as follows and the greater root of corresponding equation is equal in the second case inequality is as follows and the greater root of is equal thus , classical chaotic parameters satisfy the requirement of theorem [ main ] and the formula is valid .consider the tigan system ( t - system ) : by the transformation one has so the t - system can be transformed to the yang system with the following parameters .both systems were independently considered in 2008 year . but a particular case of the t - system was considered in 2004 .the t - system has classical chaotic parameters .conditions of theorem [ main ] are valid in the case and takes the form : \(1 ) parameter , \(2 ) inequality is transformed to and , thus , is valid , \(3 ) the roots of the corresponding equation are and , and , thus , the greater root is positive . 
therefore , if is a bounded invariant set of system with the parameters , then yang system by the transformation takes the form for the yang system becomes linear and its dynamics has minor interest .thus , without loss of generality , one can assume that .below we compare the domains of parameters in theorem [ main ] and theorem [ errortheorem ] for .the points and are the parameters which correspond the classical chaotic self - excited attractors in the yang system , classical chaotic self - excited attractor in the t - system , parameters , for which we have also found chaotic self - excited attractors by numerical analysis .g. a. leonov , n. v. kuznetsov , on differences and similarities in the analysis of lorenz , chen , and lu systems , applied mathematics and computation 256 ( 2015 ) 334343 .http://dx.doi.org/10.1016/j.amc.2014.12.132 [ ] .y. chen , q. yang , the nonequivalence and dimension formula for attractors of lorenz - type systems , international journal of bifurcation and chaos 23 ( 12 ) , art .http://dx.doi.org/10.1142/s0218127413502003 [ ] .g. tigan , bifurcation and stability in a system derived from the lorenz system , in : proceedings of the 3rd . international colloquium `` mathematics in engineering and numerical physics '' , october 7 - 9 , 2004 : [ mathematics sections ] , bsg proceedings , geometry balkan press , 2004 , pp .265272 .g. a. leonov , n. v. kuznetsov , hidden attractors in dynamical systems . from hidden oscillations in hilbert - kolmogorov ,aizerman , and kalman problems to hidden chaotic attractors in chua circuits , international journal of bifurcation and chaos 23 ( 1 ) , art .http://dx.doi.org/10.1142/s0218127413300024 [ ] .g. leonov , n. kuznetsov , t. mokaev , homoclinic orbits , and self - excited and hidden attractors in a lorenz - like system describing convective fluid motion , eur .. j. special topics 224 ( 8) ( 2015 ) 14211458 . http://dx.doi.org/10.1140/epjst/e2015-02470-3 [ ] .g. leonov , n. kuznetsov , t. mokaev , homoclinic orbit and hidden attractor in the lorenz - like system describing the fluid convection motion in the rotating cavity , communications in nonlinear science and numerical simulation 28 ( doi:10.1016/j.cnsns.2015.04.007 ) ( 2015 ) 166174 .m. shahzad , v .- t .pham , m. ahmad , s. jafari , f. hadaeghi , synchronization and circuit design of a chaotic system with coexisting hidden attractors , european physical journal : special topics 224 ( 8) ( 2015 ) 16371652 .z. zhusubaliyev , e. mosekilde , a. churilov , a. medvedev , multistability and hidden attractors in an impulsive goodwin oscillator with time delay , european physical journal : special topics 224 ( 8) ( 2015 ) 15191539 .v. semenov , i. korneev , p. arinushkin , g. strelkova , t. vadivasova , v. anishchenko , numerical and experimental studies of attractors in memristor - based chua s oscillator with a line of equilibria .noise - induced effects , european physical journal : special topics 224 ( 8) ( 2015 ) 15531561 .
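a direct numerical cross-check of such lyapunov (kaplan-yorke) dimension estimates is to compute the lyapunov spectrum of the yang system with the standard benettin/qr procedure and insert it into the kaplan-yorke formula. in the sketch below the right-hand side is the usual lorenz-like form of the yang system, taken here as an assumption, and the parameter values, step size and integration length are illustrative placeholders rather than the chaotic parameters analysed above.

```python
import numpy as np

def yang_rhs(u, a, b, c):
    x, y, z = u
    return np.array([a * (y - x), c * x - x * z, x * y - b * z])

def yang_jacobian(u, a, b, c):
    x, y, z = u
    return np.array([[-a,     a,    0.0],
                     [c - z,  0.0,  -x ],
                     [y,      x,    -b ]])

def lyapunov_spectrum(a, b, c, dt=1e-3, n_steps=300_000, u0=(1.0, 1.0, 1.0)):
    """benettin/qr estimate of the three lyapunov exponents (plain euler
    stepping; a higher-order integrator would be preferable in production)."""
    u, Q, sums = np.array(u0, float), np.eye(3), np.zeros(3)
    for _ in range(n_steps):
        J = yang_jacobian(u, a, b, c)
        u = u + dt * yang_rhs(u, a, b, c)
        Q, R = np.linalg.qr(Q + dt * (J @ Q))
        sums += np.log(np.abs(np.diag(R)))
    return sums / (n_steps * dt)

def kaplan_yorke(exponents):
    lam = np.sort(exponents)[::-1]
    s, j = 0.0, 0
    while j < lam.size and s + lam[j] >= 0.0:
        s += lam[j]
        j += 1
    return float(j) if j == lam.size else j + s / abs(lam[j])

# illustrative parameter values (not the ones analysed in the text)
exps = lyapunov_spectrum(a=10.0, b=8.0 / 3.0, c=16.0)
print(exps, kaplan_yorke(exps))
```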
In recent years, techniques for estimating the dimension of attractors of Lorenz-type systems have been actively developed. In this work the Lyapunov dimension of the attractors of the Tigan and Yang systems is estimated. Keywords: Lorenz-like system, Lorenz system, Yang system, Tigan system, Kaplan-Yorke dimension, Lyapunov dimension, Lyapunov exponents.
the semigeostrophic flow equations , which were derived by b. j. hoskins , is used in meteorology to model slowly varying flows constrained by rotation and stratification .they can be considered as an approximation of the euler equations and are thought to be an efficient model to describe front formation ( cf . ) . under certain assumptions and in some appropriately chosen curvecoordinates ( called ` dual space ' , see section [ sec-2 ] ) , they can be formulated as the following coupled system consisting of the fully nonlinear monge - ampre equation and the transport equation : ,\\ \frac{{\partial}\alpha}{{\partial}t}+{{\mbox{\rm div\,}}}(\mathbf{v}\alpha)&=0{\hspace{1cm}}&&\text{in } \mathbb{r}^3\times ( 0,t ] , \label{intro2}\\ \alpha(x,0)&=\alpha_0{\hspace{1cm}}&&\text{in } \mathbb{r}^3\times \{t=0\ } , \label{intro3 } \\\nabla \psi^*&\subset \omega , \label{intro3a}\end{aligned}\ ] ] and here , is a bounded domain , is the density of a probability measure on , and denotes the legendre transform of a convex function . for any ,we note that none of the variables , and in the system is an original primitive variable appearing in the euler equations .however , all primitive variables can be conveniently recovered from these non - physical variables ( see section [ sec-2 ] for the details ) . in this paper ,our goal is to numerically approximate the solution of . by inspecting the above system , one easily observes that there are three clear difficulties for achieving the goal .first , the equations are posed over an unbounded domain , which makes numerically solving the system infeasible .second , the -equation is the fully nonlinear monge - ampre equation .numerically , little progress has been made in approximating second order fully nonlinear pdes such as the monge - ampre equation .third , equation imposes a nonstandard constraint on the solution , which often is called the second kind boundary condition for in the pde community ( cf . ) . as a first step to approximate the solution of the above system, we must solve over a finite domain , , which then calls for the use of artificial boundary condition techniques .for the second difficulty , we recall that a main obstacle is the fact that weak solutions ( called viscosity solutions ) for second order nonlinear pdes are non - variational .this poses a daunting challenge for galerkin type numerical methods such as finite element , spectral element , and discontinuous galerkin methods , which are all based on variational formulations of pdes . to overcome the above difficulty, recently we introduced a new approach in , called the vanishing moment method in order to approximate viscosity solutions of fully nonlinear second order pdes .this approach gives rise a new notion of weak solutions , called moment solutions , for fully nonlinear second order pdes .furthermore , the vanishing moment method is constructive , so practical and convergent numerical methods can be developed based on the approach for computing viscosity solutions of fully nonlinear second order pdes .the main idea of the vanishing moment method is to approximate a fully nonlinear second order pde by a quasilinear higher order pde . 
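To make the preceding sentence concrete, the prototypical regularization used in the vanishing moment literature for the Monge-Ampère operator can be written as follows. This is the generic textbook form, reconstructed here for orientation only; the specific system used in this paper, with its boundary conditions, is stated in the next paragraph.

```latex
% Prototypical vanishing moment regularization (generic form, for orientation):
% the fully nonlinear second order problem
%     \det(D^2 u) = f
% is replaced, for a small parameter \varepsilon > 0, by the fourth order
% quasilinear problem
\[
   -\varepsilon\,\Delta^2 u^{\varepsilon} \;+\; \det\bigl(D^2 u^{\varepsilon}\bigr) \;=\; f ,
\]
% supplemented by an extra boundary condition such as
% \Delta u^{\varepsilon} = \varepsilon on the boundary, and one studies the
% limit of u^{\varepsilon} as \varepsilon \to 0^{+}.
```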
in this paper , we apply the methodology of the vanishing moment method , and approximate by the following fourth order quasi - linear system : ,\\ \frac{{\partial}\alpha^{\varepsilon}}{{\partial}t}+{{\mbox{\rm div\,}}}(\mathbf{v}^{\varepsilon}\alpha^{\varepsilon } ) & = 0{\hspace{1cm}}&&\text{in } u\times ( 0,t ] , \label{intro5}\\ \alpha^{\varepsilon}(x,0)&=\alpha_0(x){\hspace{1cm}}&&\text{in } { { \mathbb{r}}}^3\times \{t=0\ } , \label{intro6}\end{aligned}\ ] ] where it is easy to see that is underdetermined , so extra constraints are required in order to ensure uniqueness . to this end , we impose the following boundary conditions and constraint to the above system : ,\\ { \frac{\partial \delta \psi^{\varepsilon}}{\partial \nu}}&={\varepsilon}{\hspace{1cm}}&&\text{on } \partial u\times ( 0,t ] , \label{intro8}\\ \int_u \psi^{\varepsilon}dx&=0{\hspace{1cm}}&&t\in ( 0,t ] , \label{intro9}\end{aligned}\ ] ] where denotes the unit outward normal to .we remark that the choice of intends to minimize the boundary layer due to the introduction of the singular perturbation term in ( see for more discussions ) .boundary condition is used to minimize the reflection " due to the introduction of the finite computational domain .it can be regarded as a simple radiation boundary condition .an additional consequence of is that it also effectively overcomes the third difficulty , which is caused by the nonstandard constraint , for solving system . clearly , is purely a mathematical technique for selecting a unique function from a class of functions differing from each other by an additive constant .the specific goal of this paper is to formulate and analyze a modified characteristic finite element method for problem . the proposed method approximates the elliptic equation for by conforming finite element methods ( cf . ) and discretizes the transport equation for by a modified characteristic method due to douglas and russell .we are particularly interested in obtaining error estimates that show explicit dependence on for the proposed numerical method .the remainder of this paper is organized as follows . in section [ sec-2 ] , we introduce the semigeostrophic flow equations and show how they can be formulated as the monge - ampre / transport system . in section [ sec-3 ] , we apply the methodology of the vanishing moment method to approximate via , prove some properties of this approximation , and also state certain assumptions about this approximation .we then formulate our modified characteristic finite element method to numerically compute the solution of . section [ sec-4 ] mirrors the analysis found in where we analyze the numerical solution of the monge - ampre equation under small perturbations of the data . section [ sec-4 ] is of independent interests in itself , but the main results will prove to be crucial in the next section . in section [ sec-5 ] , under certain mesh and time stepping constraints , we establish optimal order error estimates for the proposed modified characteristic finite element method .the main idea of the proof is to use the results of section [ sec-4 ] and an inductive argument .finally , in section [ sec-6 ] , we provide numerical tests to validate the theoretical results of the paper .standard space notation is adopted in this paper , we refer to for their exact definitions .in particular , and denote the -inner products on and , respectively . 
is used to denote a generic positive constant which is independent of and mesh parameters and .for the reader s convenience and to provide necessary background , we shall first give a concise derivation of the hoskins semigeostrophic flow equations and then explain how the hoskins model is reformulated as a coupled monge - ampre / transport system .although our derivation essentially follows those of , we shall make an effort to streamline the ideas and key steps in a way which we thought should be more accessible to the numerical analysis community .let denote a bounded domain of the _ troposphere _ in the atmosphere .it is well known that if fluids are assumed to be incompressible , their dynamics in such a domain are governed by the following incompressible boussinesq equations which are a version of the incompressible euler equations : ,\\ \frac{d\theta}{dt } & = 0 & & \qquad \text{in } \omega\times ( 0,t ] , \label{euler2 } \\ { { \mbox{\rm div\,}}}{{\mathbf{u}}}&=0 & & \qquad \text{in } \omega\times ( 0,t ] , \label{euler3 } \\\mathbf{u}&=\mathbf{0 } & & \qquad\text{on } \partial\omega\times ( 0,t ] , \label{euler4}\end{aligned}\ ] ] where , is the velocity field , is the pressure , either denotes the temperature ( in the case of atmosphere ) or the density ( in the case of ocean ) of the fluid in question . is a reference value of .also denotes the material derivative .recall that .finally , , assumed to be a positive constant , is known as _ the coriolis parameter _ , and is the gravitational acceleration constant .we note that the term is the so - called coriolis force which is an artifact of the earth s rotation ( cf . ) . ignoring the ( low order ) material derivative term inwe get where equation is known as _ the geostrophic balance _ , which describes the balance between the pressure gradient force and the coriolis force in the horizontal directions .equation is known as _ the hydrostatic balance _ in the literature , which describes the balance between the pressure gradient force and the gravitational force in the vertical direction .define which are often called _ the geostrophic wind _ and _ ageostrophic wind _, respectively .the geostrophic and hydrostatic balances give very simple relations between the pressure field and the velocity field .however , the dynamics of the fluids are missing in the description . to overcome this limitation , j. b. hoskins proposed so - called semigeostrophic approximation which is based on replacing the material derivative term by in .this then leads to the following semigeostrophic flow equations ( in the primitive variables ) : ,\\ \frac{{\partial}p}{{\partial}x_3 } & = -\frac{\theta}{\theta_0 } g & & \qquad \text{in } \omega\times ( 0,t ] , \label{semigeoapprox1 } \\\frac{d\theta}{dt } & = 0 & & \qquad \text{in } \omega\times ( 0,t ] , \label{semigeoapprox2 } \\ { { \mbox{\rm div\,}}}{{\mathbf{u}}}&=0 & & \qquad \text{in } \omega\times ( 0,t ] , \label{semigeoapprox3 } \\ { { \mathbf{u}}}&= 0 & & \qquad \text{on } { \partial}\omega\times ( 0,t ] .\label{semigeoapprox4}\end{aligned}\ ] ] it is easy to see that after substituting , is an evolution equation for .there are no explicit dynamic equations for in the above semigeostrophic flow model . also , by the definition of the material derivative , .we note that the full velocity appears in the last term .should be replaced by in the material derivative , the resulting model is known as _ the quasi - geostrophic flow equations _ ( cf . ) . 
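The material derivative that drives this model is also what the modified method of characteristics of Douglas and Russell, adopted for the transport equation in this paper, discretizes: one traces the characteristic backwards over a time step and reads the advected quantity off there. The sketch below illustrates only that idea, not the coupled scheme of this paper: the velocity is a fixed, divergence-free rotation field rather than the semigeostrophic velocity, and the spatial discretization is plain bilinear interpolation on a uniform grid rather than a finite element space.

```python
import numpy as np

# Minimal semi-Lagrangian (characteristic-tracing) update for
#   d(alpha)/dt + v . grad(alpha) = 0
# on a uniform grid.  The velocity is a fixed solid-body rotation, used here
# purely for illustration; it is NOT the semigeostrophic velocity field.

n = 128
L_dom = 1.0
h = L_dom / n
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")

def velocity(X, Y):
    # divergence-free rotation about the domain centre (illustrative only)
    return -(Y - 0.5), (X - 0.5)

def bilinear(field, xq, yq):
    # periodic bilinear interpolation of `field` at query points (xq, yq)
    fx = (xq / h - 0.5) % n
    fy = (yq / h - 0.5) % n
    i0 = np.floor(fx).astype(int) % n
    j0 = np.floor(fy).astype(int) % n
    i1, j1 = (i0 + 1) % n, (j0 + 1) % n
    sx, sy = fx - np.floor(fx), fy - np.floor(fy)
    return ((1 - sx) * (1 - sy) * field[i0, j0] + sx * (1 - sy) * field[i1, j0]
            + (1 - sx) * sy * field[i0, j1] + sx * sy * field[i1, j1])

# initial density: a smooth bump
alpha = np.exp(-((X - 0.7) ** 2 + (Y - 0.5) ** 2) / 0.01)

dt = 0.01
u, v = velocity(X, Y)
for step in range(200):
    # trace the characteristic backwards one step and read alpha off there
    xb = X - dt * u
    yb = Y - dt * v
    alpha = bilinear(alpha, xb, yb)

print("mass (approximately conserved for a divergence-free field):",
      alpha.sum() * h * h)
```

Tracing characteristics backwards approximates the material derivative along the flow, which is what frees such schemes from the usual CFL-type restriction of explicit advection discretizations.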
due to the peculiar structure of the semigeostrophic flow equations , it is difficult to analyze and to numerically solve the equations .the first successful analytical approach is the one based on the fully nonlinear reformulation , which was first proposed in and was further developed in ( see for a different approach ) .the main idea of the reformulation is to use time - dependent curved coordinates so the resulting system becomes partially decoupled .apparently , the trade - off is the presence of stronger nonlinearity in the new formulation .the derivation of the fully nonlinear reformulation starts with introducing the so - called _ geopotential _ and _ geostrophic transformation _ a direct calculation verifies that consequently , can be rewritten compactly as where for any , let denote the fluid particle trajectory originating from , i.e. , define the composite function then we have from since the incompressibility assumption implies is volume preserving , which is equivalent to to summarize , we have reduced into . it is easy to see that is not unique because one has a freedom in choosing the geopotential .however , cullen , norbury , and purser ( also see ) discovered the so - called _ cullen - norbury - purser principle _ which says that must minimize the geostrophic energy at each time .a consequence of this minimum energy principle is that the geopotential must be a convex function . using the assumption that is convex and brenier s polar factorization theorem , brenier and benamou proved existence of such a convex function and a measure preserving mapping which solves . to relate with , , and ,let be the image measure of the lebesgue measure by , that is we note that the image measure is the push - forward of by , and is the density of with respect to the lebesgue measure .assume that is sufficiently regular , it follows from and that using a change of variable on the right and the definition of on the left we get where denotes the legendre transform of , that is , hence which yields .for convex function , by a property of the legendre transform we have .hence , therefore , holds . finally , for any ;\mathbb{r}^3) ] and ; l^p(b_r(0 ) ) ) & & \quad \text{nonnegative},\\ & \psi\in l^\infty([0,t ] ; w^{1,\infty}(\omega ) ) & & \quad \text{convex in physical space},\\ & \psi^*\in l^\infty([0,t ] ; w^{1,\infty}({{\mathbb{r}}}^3 ) & & \quad \text{convex in dual space}.\end{aligned}\ ] ] ( a ) . the above compact support result for justifies our approach of solving the original infinite domain problem on a truncated computational domain , in particular , if is chosen large enough so that . ( b ) .since and are not physical variables , one needs to recover the physical variables and from and .this can be done by the following procedure .first , one constructs the geopotential from its legendre transform .numerically , this can be done by fast inverse legendre transform algorithms .second , one recovers the pressure field from the geopotential using .third , one obtains the geostrophic wind and the full velocity field from the pressure field using .( c ) . recently , loeper generalized the above results to the case where is a global weak probability measure solution of the semigeostrophic equations .( d ) . 
as a comparison, we recall that two - dimensional incompressible euler equations ( in the vorticity - stream function formulation ) has the form , \\ \frac{{\partial}\omega}{{\partial}t } + { { \mbox{\rm div\,}}}({{\mathbf{u}}}\omega ) & = 0 & & \quad\mbox{in } { \omega}\times(0,t ] , \\ { { \mathbf{u}}}&= ( { \nabla}\phi)^\bot . & & \end{aligned}\ ] ] clearly , the main difference is that -equation above is a linear equation while in is a fully nonlinear equation .we conclude this section by remarking that in the case that the gravity is omitted , then the flow becomes two - dimensional . repeating the derivation of this section and dropping the third component of all vectors, we then obtained a -d semigeostrophic flow model which has exactly the same form as except that the definition of the operator becomes for , and in is replaced by similarly , in should be replaced by in the remaining of this paper we shall consider numerical approximations of both -d and -d models .as pointed out in section [ sec-1 ] , the primary difficulty for analyzing and numerically approximating the semigeostrophic equations is caused by the strong nonlinearity and non - uniqueness of the -equation ( i.e. , monge - ampre equation .the strong nonlinearity makes the equation non - variational , so any galerkin type numerical methods is not directly applicable to the fully nonlinear equation .non - uniqueness is difficult to deal at the discrete level because no effective selection criterion is known in the literature which guarantees picking up the physical solution ( i.e. , the convex solution ) .because of the above difficulties , very little progress was made in the past on developing numerical methods for the monge - ampre equation and other fully nonlinear second order pdes ( cf . ) . very recently , we have developed a new approach , called _ the vanishing moment method _, for solving the monge - ampre equation and other fully nonlinear second order pdes ( cf .our basic idea is to approximate a fully nonlinear second order pde by a singularly perturbed quasilinear fourth order pde . in the case of the monge - ampre equation , we approximate the fully nonlinear second order equation by the following fourth order quasilinear pde accompanied by appropriate boundary conditions. numerics of show that for fixed , converges to the unique convex solution of as .rigorous proof of the convergence in some special cases was carried out in . upon establishing the convergence of the vanishing moment method, one can use various well - established numerical methods ( such as finite element , finite difference , spectral and discontinuous galerkin methods ) to solve the perturbed quasilinear fourth order pde .remarkably , our experiences so far suggest that the vanishing moment method always converges to the physical solution .the success motivates us to apply the vanishing moment methodology to the semigeostrophic model , which leads us to studying problem . since a perturbation term is introduced in , it is also natural to introduce a viscosity " term on the left - hand side of .we believe this should be another viable strategy and will further explore the idea and compare the anticipated new result with that of this paper . since is a quasilinear system , we can define weak solutions for problem in the usual way using integration by parts .[ def3.1 ] a pair of functions is called a weak solution to if they satisfy the following integral identities for almost every : here when and when , and we have used the fact that . 
for the continuation of the paper , we assume that there exists a unique solution to such that is convex , , and supp for all ] where denotes the cofactor matrix of .as expected , the proof of the above assumptions is extensive and not easy .we do not intend to give a full proof in this paper .however , in the following we shall present a proof for a key assertion , that is , in ] .[ prop1 ] suppose is a regular solution of .assume in , then in ] , let denote the characteristic curve passing through for the transport equation , that is . then the solution at can be written as hence , for all ] .then the cofactor matrix of the gradient matrix of satisfies the following row divergence - free property : where and denote respectively the row and the -entry of . throughout the rest of this section, we assume , set , and assume the following bounds ( compare to those of and ): for we then have the following results .[ mapcenterlem ] there exists a constant such that to ease notation set and . then for any , we use the mean value theorem to get &={\varepsilon}(\delta(i_hu^\varphi,\delta v_h)-(\det(d^2(i_hu^\varphi),v_h)\\ & \hspace{0.4in}+(\tilde{\varphi},v_h)+\langle { \varepsilon}^2,v_h\rangle\\ & = { \varepsilon}(\delta \eta,\delta v_h)+(\det(d^2u^\varphi)-\det(d^2(i_hu^\varphi),v_h ) + ( \delta \varphi , v_h)\\ & = { \varepsilon}(\delta \eta,\delta v_h)+(\upsilon^{\varepsilon}:d^2(u^\varphi - i_hu^\varphi),v_h ) + ( \delta \varphi , v_h),\end{aligned}\ ] ] where for ] we get the proof is complete. [ contractinglem ] there exists such that for , there exists an such that for any there holds from the definitions of and we get for any \\ & = \left(\phi^\varphi(\nabla v_h-\nabla w_h),\nabla z_h\right ) + \left(\text{det}(d^2v_h)-\text{det}(d^2w_h),z_h\right).\end{aligned}\ ] ] adding and subtracting and , where and denote the standard mollifications of and , respectively , yields \\ & \quad = ( \phi^\varphi(t_m)(\nabla v_h-\nabla w_h),\nabla z_h ) + ( \text{det}(d^2v^\mu_h ) -\text{det}(d^2w^\mu_h),z_h)\\ & \hspace{1.75 cm } + ( \text{det}(d^2v_h)-\text{det}(d^2v_h^\mu),z_h)+(\text{det}(d^2w^\mu_h ) -\text{det}(d^2w_h),z_h)\\ & \quad = ( \phi^\varphi ( \nabla v_h-\nabla w_h),\nabla z_h ) + ( \psi_h:(d^2v^\mu_h - d^2w^\mu_h),z_h)\\ & \hspace{1.75 cm } + ( \text{det}(d^2v_h)-\text{det}(d^2v_h^\mu),z_h ) + ( \text{det}(d^2w^\mu_h)-\text{det}(d^2w_h),z_h),\end{aligned}\ ] ] where for ] .we bound as follows : where we used the triangle inequality followed by the inverse inequality and . combining the above two inequalities we get , applying to and setting yield \le c\bigl({\varepsilon}^{-1}+h^{-\frac32}\rho\bigr ) \bigl(h^{\ell-2}\|u^\varphi\|_{h^\ell}+\rho \bigr ) \|v_h - w_h\|_{h^2 } \|z_h\|_{h^2}.\end{aligned}\ ] ] using the coercivity of ] . 
next , let be the unique solution to the following problem : =(\nabla e^\varphi,\nabla z){\hspace{1cm}}\forall z\in v_0.\end{aligned}\ ] ] the regularity assumption implies that we then have we bound as as follows : where ] denotes the characteristic function of the set \times [ 2.25,3.75]$ ] .we comment that the exact solution of this problem is unknown .we plot the computed and at times , , and , and in figure [ figtest5 ] with parameters , and .as expected , the figure shows that and is convex for all .test 3 : computed ( left ) and ( right ) at ( top ) , ( middle ) , and ( bottom ) .,title="fig:",width=274,height=236 ] test 3 : computed ( left ) and ( right ) at ( top ) , ( middle ) , and ( bottom ) .,title="fig:",width=274,height=236 ] + test 3 : computed ( left ) and ( right ) at ( top ) , ( middle ) , and ( bottom ) .,title="fig:",width=274,height=236 ] test 3 : computed ( left ) and ( right ) at ( top ) , ( middle ) , and ( bottom ) .,title="fig:",width=274,height=236 ] + test 3 : computed ( left ) and ( right ) at ( top ) , ( middle ) , and ( bottom ) .,title="fig:",width=274,height=236 ] test 3 : computed ( left ) and ( right ) at ( top ) , ( middle ) , and ( bottom ) .,title="fig:",width=274,height=236 ] j. douglas , jr . , _ numerical methods for the flow of miscible fluids in porous media _ in numerical methods in coupled systems ( r. w. lewis , p. bettess , and e. hinton eds . ) , john wiley & songs , new york .j. douglas , jr . and t. russell , _ numerical methods for convection - dominated diffusion problems based on combining the method of characteristics with finite element or finite difference procedures _ , siam. j. numer ., 19(5):871 - 885 , 1982 .
This paper develops a fully discrete modified characteristic finite element method for a coupled system consisting of the fully nonlinear Monge-Ampère equation and a transport equation. The system is the Eulerian formulation in the dual space for B. J. Hoskins' semigeostrophic flow equations, which are widely used in meteorology to model slowly varying flows constrained by rotation and stratification. To overcome the difficulty caused by the strong nonlinearity, we first formulate (at the differential level) a vanishing moment approximation of the semigeostrophic flow equations, a methodology recently proposed by the authors, which involves approximating the fully nonlinear Monge-Ampère equation by a family of fourth order quasilinear equations. We then construct a fully discrete modified characteristic finite element method for the regularized problem. It is shown that, under certain mesh and time stepping constraints, the proposed numerical method converges with an optimal rate. In particular, the obtained error bounds show explicit dependence on the regularization parameter. Numerical tests are also presented to validate the theoretical results and to gauge the efficiency of the proposed method. Keywords: semigeostrophic flow, fully nonlinear PDE, viscosity solution, modified characteristic method, finite element method, error analysis. AMS subject classifications: 65M12, 65M15, 65M25, 65M60.
in the simplest and commonest case , ` linguistic annotation ' is an orthographic transcription of speech , time - aligned to an audio or video recording .other central examples include morphological analysis , part - of - speech tagging and syntactic bracketing ; phonetic segmentation and labeling ; annotation of disfluencies , prosodic phrasing , intonation , gesture , and discourse structure ; marking of co - reference , ` named entity ' tagging , and sense tagging ; and phrase - level or word - level translations .linguistic annotations may describe texts or recorded signals. our focus will be on the latter , broadly construed to include any kind of audio , video or physiological recording , or any combination of these , for which we will use the cover term ` linguistic signals ' .however , our ideas also apply to the annotation of texts .linguistic annotations have seen increasingly broad use in the scientific study of language , in research and development of language - related technologies , and in language - related applications more broadly , for instance in the entertainment industry .particular cases range from speech databases used in speech recognition or speech synthesis development , to annotated ethnographic materials , to cartoon sound tracks .there have been many independent efforts to provide tools for creating linguistic annotations , to provide general formats for expressing them , and to provide tools for creating , browsing and searching databases containing them see [ ] . within the area of speech and language technology development alone , hundreds of annotated linguistic databaseshave been published in the past fifteen years .while the utility of existing tools , formats and databases is unquestionable , their sheer variety and the lack of standards able to mediate among them is becoming a critical problem .particular bodies of data are created with particular needs in mind , using formats and tools tailored to those needs , based on the resources and practices of the community involved . once created , a linguistic databasemay subsequently be used for a variety of unforeseen purposes , both inside and outside the community that created it .adapting existing software for creation , update , indexing , search and display of ` foreign ' databases typically requires extensive re - engineering . working across a set of databases requires repeated adaptations of this kind .previous attempts to standardize practice in this area have primarily focussed on file formats and on the tags , attributes and values for describing content ( e.g. , ; but see also ) .we contend that file formats and content specifications , though important , are secondary . instead, we focus on the logical structure of linguistic annotations .we demonstrate that , while different existing annotations vary greatly in their form , their logical structure is remarkably consistent . 
in order to help us think about the form and meaning of annotations, we describe a simple mathematical framework endowed with a practically useful formal structure .this opens up an interesting range of new possibilities for creation , maintenance and search .we claim that essentially all existing annotations can be expressed in this framework .thus , the framework should provide a useful ` interlingua ' for translation among the multiplicity of current annotation formats , and also should permit the development of new tools with broad applicability .before we embark on our survey , a terminological aside is necessary .as far as we are aware , there is no existing cover term for the kinds of transcription , description and analysis that we address here .` transcription ' may refer to the use of ordinary orthography , or a phonetic orthography ; it can plausibly be extended to certain aspects of prosody ( ` intonational transcription ' ) , but not to other kinds of analysis ( morphological , syntactic , rhetorical or discourse structural , semantic , etc ) .one does not talk about a ` syntactic transcription ' , although this is at least as determinate a representation of the speech stream as is a phonetic transcription . `coding ' has been used by social scientists to mean something like ` the assignment of events to stipulated symbolic categories , ' as a generalization of the ordinary language meaning associated with translating words and phrases into references to a shared , secret code book .it would be idiosyncratic and confusing ( though conceptually plausible ) to refer to ordinary orthographic transcription in this way .the term ` markup ' has come to have a specific technical meaning , involving the addition of typographical or structural information to a document . in ordinary language ,` annotation ' means a sort of commentary or explanation ( typically indexed to particular portions of a text ) , or the act of producing such a commentary .like ` markup ' , this term s ordinary meaning plausibly covers the non - transcriptional kinds of linguistic analysis , such as the annotation of syntactic structure or of co - reference .some speech and language engineers have begun to use ` annotation ' in this way , but there is not yet a specific , widely - accepted technical meaning . we feel that it is reasonable to generalize this term to cover the case of transcribing speech , by thinking of ` annotation ' as the provision of any symbolic description of particular portions of a pre - existing linguistic object .if the object is a speech recording , then an ordinary orthographic transcription is certainly a kind of annotation in this sense though it is one in which the amount of critical judgment is small . in sum , ` annotation ' is a reasonable candidate for adoption as the needed cover term .the alternative would be to create a neologism ( ` scription ' ? ) .extension of the existing term ` annotation ' seems preferable to us .in order to justify our claim that essentially all existing linguistic annotations can be expressed in the framework that we propose , we need to discuss a representative set of such annotations .in addition , it will be easiest to understand our proposal if we motivate it , piece by piece , in terms of the logical structures underlying existing annotation practice .this section reviews nine bodies of annotation practice , with a concrete example of each . 
for each example , we show how to express its various structuring conventions in terms of our ` annotation graphs ' , which are networks consisting of nodes and arcs , decorated with time marks and labels .following the review , we shall discuss some general architectural issues ( [ sec : arch ] ) , give a formal presentation of the ` annotation graph ' concept ( [ sec : algebra ] ) , and describe some indexing methods ( [ sec : indexing ] ) .the paper concludes in [ sec : conclusion ] with an evaluation of the proposed formalism and a discussion of future work .the nine annotation models to be discussed in detail are timit , partitur , childes , the lacito archiving project , ldc broadcast news , ldc telephone speech , nist utf , emu and festival . these models are widely divergent in type and purpose .some , like timit , are associated with a specific database , others , like utf , are associated with a specific linguistic domain ( here conversation ) , while still others , like festival , are associated with a specific application domain ( here , speech synthesis ) .several other systems and formats have been considered in developing our ideas , but will not be discussed in detail .these include switchboard , hcrc maptask , tei , and mate .the switchboard and maptask formats are conversational transcription systems that encode a subset of the information in the ldc and nist formats cited above .the tei guidelines for ` transcriptions of speech ' ( * ? ? ?* p11 ) are also similar in content , though they offer access to a very broad range of representational techniques drawn from other aspects of the tei specification . the tei report sketches or alludes to a correspondingly wide range of possible issues in speech annotation .all of these seem to be encompassed within our proposed framework , but it does not seem appropriate to speculate at much greater length about this , given that this portion of the tei guidelines does not seem to have been used in any published transcriptions to date . as for mate , it is a new sgml- and tei - based standard for dialogue annotation , in the process of being developed. it also appears to fall within the class of annotation systems that our framework covers , but it would be premature to discuss the correspondences in detail . still other models that we are aware of include .note that there are many kinds of linguistic database that are not linguistic annotations in our sense , although they may be connected with linguistic annotations in various ways .one example is a lexical database with pointers to speech recordings along with transcriptions of those recordings ( e.g. hyperlex ) .another example would be collections of information that are not specific to any particular stretch of speech , such as demographic information about speakers .we return to such cases in [ sec : extensions ] .the timit corpus of read speech was designed to provide data for the acquisition of acoustic - phonetic knowledge and to support the development and evaluation of automatic speech recognition systems .timit was the first annotated speech database to be published , and it has been widely used and also republished in several different forms .it is also especially simple and clear in structure . 
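The annotation graphs used informally in the figures of this section are just labelled directed acyclic graphs whose nodes carry optional time references. A minimal encoding might look as follows; this is purely an illustrative data structure, not the formal algebra of [sec:algebra] and not the API of any published toolkit. The toy example at the end anticipates the TIMIT word fragment quoted immediately below, with sample numbers converted to seconds at the 16 kHz sampling rate.

```python
from dataclasses import dataclass, field
from typing import Optional

# A minimal, illustrative encoding of an annotation graph: nodes with
# optional time references, and typed, labelled arcs between them.

@dataclass
class Node:
    ident: str
    time: Optional[float] = None          # seconds; None if unknown or inapplicable

@dataclass
class Arc:
    src: str
    dst: str
    type: str                             # e.g. "W" for word, "P" for phone
    label: str

@dataclass
class AnnotationGraph:
    nodes: dict = field(default_factory=dict)
    arcs: list = field(default_factory=list)

    def add_node(self, ident, time=None):
        self.nodes[ident] = Node(ident, time)

    def add_arc(self, src, dst, type, label):
        self.arcs.append(Arc(src, dst, type, label))

    def arcs_of_type(self, type):
        return [a for a in self.arcs if a.type == type]

# Toy example: the first two words of the TIMIT fragment quoted below,
# with sample numbers converted to seconds at 16 kHz.
g = AnnotationGraph()
for ident, sample in [("n1", 2360), ("n2", 5200), ("n3", 9680)]:
    g.add_node(ident, sample / 16000.0)
g.add_arc("n1", "n2", "W", "she")
g.add_arc("n2", "n3", "W", "had")

print([(a.label, g.nodes[a.src].time, g.nodes[a.dst].time)
       for a in g.arcs_of_type("W")])
```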
here, we just give one example taken from the timit database .the file contains : 2360 5200 she 5200 9680 had 9680 11077 your 11077 16626 dark 16626 22179 suit 22179 24400 in 24400 30161 greasy 30161 36150 wash 36720 41839 water 41839 44680 all 44680 49066 year this file combines an ordinary string of orthographic words with information about the starting and ending time of each word , measured in audio samples at a sampling rate of 16 khz .the path name tells us that this is training data , from ` dialect region 1 ' , from female speaker ` jsp0 ' , containing words and audio sample numbers .the file contains a corresponding broad phonetic transcription , which begins as follows : 0 2360 h # 2360 3720 sh 3720 5200 iy 5200 6160 hv 61608720 ae 8720 9680 dcl 9680 10173 y 10173 11077 axr 11077 12019 dcl 12019 12257 d 20ex we can interpret each line : as an edge in a directed acyclic graph , where the two times are attributes of nodes and the label is a property of an edge connecting those nodes . the resulting annotation graph for the above fragment is shown in figure [ timit ] .observe that edge labels have the form where the here tells us what kind of label it is .we have used for the ( phonetic transcription ) contents of the .phn file , and for the ( orthographic word ) contents of the .wrd file .the top number for each node is an arbitrary node identifier , while the bottom number is the time reference .we distinguish node identifiers from time references since nodes may lack time references , as we shall see later .the partitur format of the bavarian archive for speech signals is founded on the collective experience of a broad range of german speech database efforts .the aim has been to create ` an open ( that is extensible ) , robust format to represent results from many different research labs in a common source . 'partitur is valuable because it represents a careful attempt to present a common low - level core for all of those independent efforts , similar in spirit to our effort here .in essence , partitur extends and reconceptualizes the timit format to encompass a wide range of annotation types . 
the partitur format permits time - aligned , multi - tier description of speech signals , along with links between units on different tiers which are independent of the temporal structure .for ease of presentation , the example partitur file will be broken into a number of chunks , and certain details ( such as the header ) will be ignored .the fragment under discussion is from one of the verbmobil corpora at the bavarian archive of speech signals .the kan tier provides the canonical transcription , and introduces a numerical identifier for each word to serve as an anchor for all other material .kan : 0 ja : kan : 1 s2:n kan : 2 dank kan : 3 das+ kan : 4 ve : r@+ kan : 5 ze:6 kan : 6 net tiers for orthographic and transliteration information then reference these anchors as shown below , with orthographic information ( ort ) on the left and transliteration information ( trl ) on the right .ort : 0 ja trl : 0 < a > ort : 1 sch``onen trl : 0 ja , ort : 2 dank trl : 1 sch''onen ort : 3 das trl : 1 < : < # klopfen > ort : 4 w``are trl : 2 dank : > , ort : 5 sehr trl : 3 das ort : 6 nett trl : 4 w''ar trl : 5 sehr trl : 6 nett .higher level structure representing dialogue acts refers to extended intervals using contiguous sequences of anchors , as shown below : das : 0,1,2 @(thank_init ba ) das : 3,4,5,6 @(feedback_acknowledgement ba ) speech data can be referenced using annotation lines containing offset and duration information . as before , links to the kan anchors are also specified ( as the second - last field ) .mau : 4160 1119 0 j mau : 17760 1119 3 a mau : 5280 2239 0 a : mau : 18880 1279 3 s mau : 7520 2399 1 s mau : 20160 959 4 v mau : 9920 1599 1 2 : mau : 21120 639 4 e : mau : 11520 479 1 n mau : 21760 1119 4 6 mau : 12000 479 1 n mau : 22880 1119 5 z mau : 12480 479 -1< nib > mau : 24000 799 5 e : mau : 12960 479 2 d mau : 24800 1119 5 6 mau : 13440 2399 2 a mau : 25920 1279 6n mau : 15840 1279 2 n mau : 27200 1919 6 e mau : 17120 639 3 d mau : 29120 2879 6 t mau : 32000 2559 -1 <p : > the content of the first few words of the ort ( orthography ) , das ( dialog act ) and mau ( phonetic segment ) tiers can apparently be expressed as in figure [ partitur ] .note that we abbreviate the types , using for ort , for das , and for mau .20ex with its extensive user base , tools and documentation , and its coverage of some two dozen languages , the child language data exchange system , or childes , represents the largest scientific as opposed to engineering enterprise involved in our survey .the childes database includes a vast amount of transcript data collected from children and adults who are learning languages .all of the data are transcribed in the so - called ` chat ' format ; a typical instance is provided by this opening fragment of a chat transcription : : boys73.cha : ros ross child , mar mark child , fat brian father , mot mary mother : 4-apr-1984 of ros : 6;3.11 of ros : male of ros : 25-dec-1977 of mar : 4;4.15 of mar : 19-nov-1979 of mar : male : room cleaning * ros : yahoo . *fat : you got a lot more to do # do nt you ? *mar : yeah . 
* mar : because i m not ready to go to < the bathroom > [ > ] + /.the lines , by the conventions of this notation , provide times for the previous transcription lines , in milliseconds relative to the beginning of the referenced file .the first two lines of this transcript might then be represented graphically as in figure [ chat2 ] .observe that the gap between the conversational turns results in a disconnected graph .note also that the annotations in the original chat file included a file name ; see [ sec : associations ] for a discussion of associations between annotations and files .the representation in figure [ chat2 ] is inadequate , for it treats entire phrases as atomic arc labels , complicating indexing and search .we favor the representation in figure [ chat1 ] , where labels have uniform ontological status regardless of the presence vs. absence of time references .observe that most of the nodes in figure [ chat1 ] _ could _ have been given time references in the chat format but were not .our approach maintains the same topology regardless of the sparseness of temporal information .notice that some of the tokens of the transcript , i.e. the punctuation marks , are conceptually not references to discrete stretches of time in the same way that orthographic words are .( the distinction could be reflected by choosing a different type for punctuation labels . )evidently it is not always meaningful to assign time references to the nodes of an annotation .we shall see a more pervasive example of this atemporality in the next section .lacito langues et civilisations tradition orale is a cnrs organization concerned with research on unwritten languages .the lacito linguistic data archiving project was founded to conserve and distribute the large quantity of recorded , transcribed speech data collected by lacito members over the last three decades . in this section we discuss a transcription for an utterance in hayu , a tibeto - burman language of nepal .the gloss and free translation are in french .xml version=``1.0 '' encoding=``iso-8859 - 1 '' ? >< ! doctype archive system `` archive.dtd '' >< archive > < header >< title > deux soeurs</title > < soundfile href=``hayu.wav''/ > < /header > < text lang=``hayu '' > <s id=``s1 '' > < transcr > < w > nakpu</w > < w > nonotso</w > < w > si&#x014b;</w > < w > pa</w> < w > la&#x0294;natshem</w > < w > are.</w > < /transcr > < audio type=``wav '' start=``0.0000 '' end=``5.5467''/ > < traduc > on raconte que deux soeurs all&egrave;rent un jour chercher du bois.</traduc >< motamot > < w > deux</w > < w > soeurs</w > < w > bois</w > < w > faire</w > < w > all&egrave;rent(d)</w >< w > dit.on.</w > < /motamot > < /s > < /text >< /archive > 15ex a possible graphical representation of the annotation of the sentence , expressed as a labeled directed acyclic graph of the type under discussion , is shown in figure [ archivage ] .here we have three types of edge labels : for the words of the hayu story ; for a word - by - word interlinear translation into french ; and for a phrasal translation into french .( we have taken a small liberty with the word - by - word annotation in the original file , which is arranged so that the ( for ` word ' ) tokens in the hayu are in one - to - one correspondence with the tokens in the french interlinear version . 
in such cases ,it is normal for individual morphemes in the source language to correspond to several morphemes in the target language .this happens twice in the sentence in question , and we have split the interlinear translations to reflect the natural tokenization of the target language . ) in this example , the time references ( which are in seconds ) are again given only at the beginning and end of the phrase , as required by the lacito archiving project format . nevertheless , the individual hayu words have temporal extent and one might want to indicate that in the annotation .observe that there is no meaningful way of assigning time references to word boundaries in the phrasal translation .whether the time references happen to be unknown , as in the upper half of figure [ archivage ] , or are intrinsically un - knowable , as in the lower half of figure [ archivage ] , we can treat the , and annotations in identical fashion .the linguistic data consortium ( ldc ) is an open consortium of universities , companies and government research laboratories , hosted by the university of pennsylvania , that creates , collects and publishes speech and text databases , lexicons , and similar resources . since its foundation in 1992 , it has published some 150 digital databases , most of which contain material that falls under our definition of ` linguistic annotation . 'the hub-4 english broadcast news corpora from the ldc contain some 200 hours of speech data with sgml annotation [ ] . about 60 hours of similar material has been published in mandarin and spanish , and an additional corpus of some 700 hours of english broadcast material will be published this year .what follows is the beginning of a radio program transcription from the hub-4 corpus .< background type = music time=0.000 level = high > < background type = music time=4.233 level = low > < section s_time=4.233 e_time=59.989 type = filler > < segment s_time=4.233 e_time=13.981 speaker=``tad_bile '' fidelity = low mode = spontaneous > it will certainly make some of these districts more competitive than they have been < sync time=8.015 > so there will be some districts which are republican < sync time=11.040> but all of a sudden they may be up for grabs < /segment > < segment s_time=13.981 e_time=40.840 speaker=``noah_adams '' fidelity = high mode = planned > politicians get the maps out again < sync time=15.882 > for friday june fourteenth this is n. p. r.s all things considered < sync time=18.960 > < background type = music time=23.613 level = low > < sync time=23.613 > in north carolina and other states officials are trying to figure out the effects of the supreme court ruling against minority voting districts breath < sync time=29.454 > a business week magazine report of a federal criminal investigation breath < sync time=33.067 > into the cause and the aftermath of the valujet crash in florida breath < sync time=36.825 > efforts in education reform breath and the question will the public pay < /segment > transcriptions are divided into sections ( see the tag ) , where each section consists of a number of blocks . 
at various times during a segment a element is inserted to align a word boundary with an offset into a speech file .elements specifying changes in background noise and signal quality function independently of the hierarchy .for example , a period of background music might bridge two segments , beginning in one segment and ending in the next .figure [ ldc1 ] represents the structure of this annotation .dotted arcs represent elided material , is for words and is for background music level .the ldc - published callhome corpora include digital audio , transcripts and lexicons for telephone conversations in several languages .the corpora are designed to support research on speech recognition algorithms [ ] .the transcripts exhibit abundant overlap between speaker turns in two - way telephone conversations .what follows is a typical fragment of an annotation .each stretch of speech consists of a begin time , an end time , a speaker designation ( ` a ' or ` b ' in the example below ) , and the transcription for the cited stretch of time .we have augmented the annotation with and to indicate partial and total overlap ( respectively ) with the previous speaker turn .962.68 970.21 a : he was changing projects every couple of weeks and he said he could nt keep on top of it .he could nt learn the whole new area * 968.71 969.00 b : 970.35 971.94 a : that fast each time . *971.23 971.42 b : 972.46 979.47 a : was diagnosed as having attention deficit disorder . which 980.18 989.56 a : you know , given how he s how far he s gotten , you know , he got his degree at & tufts and all , i found that surprising that for the first time as an adult they re diagnosing this .+ 989.42 991.86 b : + 991.75 994.65 a : yeah , but that s what he said . and * 994.19 994.46 b : yeah . 995.21 996.59 a : he + 996.51997.61 b : whatever s helpful .+ 997.40 1002.55 a : right .so he found this new job as a financial consultant and seems to be happy with that .1003.14 1003.45 b : good .+ 1003.06 1006.27 a : and then we saw & leo and & julie at christmas time .* 1005.45 1006.00 b : uh - huh .1006.70 1009.85 a : and they re doing great .+ 1009.25 1010.58 b : he s in & new & york now , right ?+ 1010.19 1013.55 a : a really nice house in & westchester .yeah , an o- + 1013.38 1013.61 b : good .+ 1013.52 1018.57 a : an older home that you know & julie is of course carving up and making beautiful .* 1018.15 1018.40 b : uh - huh .1018.68 1029.75 a : now she had a job with an architectural group when she first got out to & new & york , and that did nt work out .she said they had her doing things that she really was nt qualified to do long turns ( e.g. the period from 972.46 to 989.56 seconds ) were broken up into shorter stretches for the convenience of the annotators .thus this format is ambiguous as to whether adjacent stretches by the same speaker should be considered parts of the same unit , or parts of different units in translating to an annotation graph representation , either choice could be made .however , the intent is clearly just to provide additional time references within long turns , so the most appropriate choice seems to be to merge abutting same - speaker structures while retaining the additional time - marks . 
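The merging policy just described, concatenating consecutive same-speaker stretches into one turn while keeping their internal time marks as extra anchors, is easy to state precisely. The sketch below assumes the simple "start end speaker: text" line layout of the fragment above; the sample lines are toy input (speaker B's material is a placeholder), and the leading "*" and "+" overlap flags are simply skipped.

```python
import re

# Sketch of the merging policy described above: consecutive stretches by the
# same speaker are treated as one turn, but the internal time marks are kept
# as extra anchors.  Lines follow the "start end SPK: text" layout of the
# fragment above; "*" and "+" overlap flags are ignored here.

LINE = re.compile(r"^[*+]?\s*([\d.]+)\s+([\d.]+)\s+([AB]):\s*(.*)$")

def parse(lines):
    out = []
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            start, end, spk, text = m.groups()
            out.append((float(start), float(end), spk, text))
    return out

def merge_turns(stretches):
    """Group consecutive stretches by the same speaker into single turns,
    keeping every original time point as an internal anchor."""
    turns = []
    for start, end, spk, text in stretches:
        if turns and turns[-1]["speaker"] == spk:
            turns[-1]["anchors"] += [start, end]   # extra time references inside the turn
            turns[-1]["text"].append(text)
        else:
            turns.append({"speaker": spk, "anchors": [start, end], "text": [text]})
    return turns

# Toy input modelled on the fragment above; speaker B's text is a placeholder.
sample = [
    "972.46 979.47 A: was diagnosed as having attention deficit disorder. which",
    "980.18 989.56 A: you know, given how far he's gotten, I found that surprising.",
    "+ 989.42 991.86 B: (placeholder reply)",
]

for t in merge_turns(parse(sample)):
    print(t["speaker"], sorted(set(t["anchors"])), " ".join(t["text"]))
```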
a section of this annotation including an example of total overlap is represented in annotation graph form in figure [ callhome ] .the turns are attributed to speakers using the type .all of the words , punctuation and disfluencies are given the type , though we could easily opt for a more refined version in which these are assigned different types .observe that the annotation graph representation preserves the non - explicitness of the original file format concerning which of speaker a s words overlap which of speaker b s words .of course , additional time references could specify the overlap down to any desired level of detail ( including to the level of phonetic segments or acoustic events if desired ) . the us national institute of standards and technology ( nist ) has recently developed a set of annotation conventions ` intended to provide an extensible universal format for transcription and annotation across many spoken language technology evaluation domains ' .this ` universal transcription format ' ( utf ) was based on the ldc broadcast news format , previously discussed .a key design goal for utf was to provide an sgml - based format that would cover both the ldc broadcast transcriptions and also various ldc - published conversational transcriptions , while also providing for plausible extensions to other sorts of material .a notable aspect of utf is its treatment of overlapping speaker turns . in the following fragment ( from the hub-4 1997 evaluation set ) ,overlapping stretches of speech are marked with the ( begin overlap ) and ( end overlap ) tags .< turn speaker=``roger_hedgecock '' spkrtype=``male '' dialect=``native '' starttime=``2348.811875 '' endtime=``2391.606000 '' mode=``spontaneous '' fidelity=``high '' > ...< time sec=``2378.629937 '' > now all of those things are in doubt after forty years of democratic rule in <b_enamex type=``organization''>congress < e_enamex > < time sec=``2382.539437 '' > \{breath because < contraction e_form=``[you=>you][ve=>have]''>youve got quotas \{breath and set < hyphen > asides and rigidities in this system that keep you < time sec=``2387.353875 '' > on welfare and away from real ownership \{breath and < contraction e_form=``[that=>that][s=>is]''>that s a real problem in this < b_overlap starttime=``2391.115375 '' endtime=``2391.606000 '' > country < e_overlap > < /turn > < turn speaker=``gloria_allred '' spkrtype=``female '' dialect=``native '' starttime=``2391.299625 '' endtime=``2439.820312 '' mode=``spontaneous '' fidelity=``high '' > < b_overlap starttime=``2391.299625 '' endtime=``2391.606000 ''> well i < e_overlap > think the real problem is that < time sec=``2395.462500 '' > i see as code words for discrimination ... 
< /turn> 0ex observe that there are two speaker turns , where the first speaker s utterance of ` country ' overlaps the second speaker s utterance of ` well i ' .note that the time attributes for overlap are not required to coincide , since they are aligned to ` the most inclusive word boundaries for each speaker turn involved in the overlap ' .the coincidence of end times in this case is almost surely an artifact of the user interface of the system used to create the annotations , which required overlaps to be specified relative to word boundaries .the structure of overlapping turns can be represented using annotation graphs as shown in figure [ utf ] .each speaker turn is a separate connected subgraph , disconnected from other speaker turns .this situation neatly reflects the fact that the time courses of utterances by various speakers in conversation are logically asynchronous .observe that the information about overlap is implicit in the time references and that partial word overlap can be represented .this seems like the best choice in general , since there is no necessary logical structure to conversational overlaps at base , they are just two different actions unfolding over the same time period .the cited annotation graph structure is thus less explicit about word overlaps than the utf file .however , if a more explicit symbolic representation of overlaps is desired , specifying that such - and - such a stretch of one speaker turn is associated with such - and - such a stretch of another speaker turn , this can be represented in our framework using the inter - arc linkage method described in [ sec : multiple ] , or using the extension described in [ sec : extensions ] . of course , the same word - boundary - based representation of overlapping turns could also be expressed in annotation graph form , by allowing different speakers transcripts to share certain nodes ( representing the word boundaries at which overlaps start or end ) .we do not suggest this , since it seems to us to be based on an inappropriate model of overlapping , which will surely cause trouble in the end . note the use of the` lexical ' type to include the full form of a contraction .the utf format employed special syntax for expanding contractions .no additional ontology was needed in order to do this in the annotation graph .( a query to find instances of or would simply disjoin over the types . )note also that it would have been possible to replicate the type system , replacing with for ` speaker 1 ' and for ` speaker 2 ' .however , we have chosen instead to attribute material to speakers using the type on an arc spanning an entire turn .the disconnectedness of the graph structure means there can be no ambiguity about the attribution of each component arc to a speaker .as we have argued , annotation graphs of the kind shown in figure [ utf ] are actually more general and flexible than the utf files they model .the utf format imposes a linear structure on the speaker turns and assumes that overlap only occurs at the periphery of a turn .in contrast , the annotation graph structure is well - behaved for partial word overlap , and it scales up naturally and gracefully to the situation where multiple speakers are talking simultaneously ( e.g. for transcribing a radio talk - back show with a compere , a telephone interlocutor and a panel of discussants ) .it also works for arbitrary kinds of overlap ( e.g. 
where one speaker turn is fully contained inside another ) , as discussed in the previous section .the emu speech database system grew out of the earlier mu+ ( macquarie university ) system , which was designed to support speech scientists who work with large collections of speech data , such as the australian national database of spoken language [ ] .emu permits hierarchical annotations arrayed over any number of levels , where each level is a linear ordering .an annotation resides in a single file linked to an xwaves label file .the file begins with a declaration of the levels of the hierarchy and the immediate dominance relations .level utterance level intonational utterance level intermediate intonational level word intermediate level syllable word level phoneme syllable level phonetic phoneme many - to - many the final line licenses a many - to - many relationship between phonetic segments and phonemes , rather than the usual many - to - one relationship . according to the user s manual ,this is only advisable at the bottom of the hierarchy , otherwise temporal ambiguities may arise . at any given level of the hierarchy , the elements may have more than one attribute .for example , in the following declarations we see that elements at the level may be decorated with and information , while syllables may carry a pitch accent .label word accent label word text label syllable pitch_accent the next line sets up a dependency between the level and an xwaves label file linked to esps - formatted audio data .labfile phonetic : format esps : type segment : mark end : extension lab : time - factor 1000 the declaration distinguishes ` segments ' with duration from ` events ' which are instantaneous . here , the time associated with a segment will mark its endpoint rather than its starting point , as indicated by the declaration .the timing information from the label file is adopted into the hierarchy ( scaled from to ms ) , and can propagate upwards .in this way , the end of a phonetic segment may also become the end of a syllable , for example .the sequence of labels from the xwaves label file is reproduced in the emu annotation , while the timing information remains in the xwaves label file .therefore the latter file is an essential part of an emu annotation and must be explicitly referenced .the labels are assigned unique numerical identifiers , as shown below for the sentence ` the price range is smaller than any of us expected ' .( for compactness , multiple lines have been collapsed to a single line . 
) phonetic phonetic 0 d 9 @ 11 p 16 h 17 or 19 r 20 ai 22 s 24 or 30r 31 ei 33 n 35 z 37 i 44 zs 50 om 52 m 53 o : 55 l 58 @ 60 d 65 @ 67 n 69 ec 76 e 77 n 80 i : 82 @ 88 v 90 @ 95 s 97 i 102 k 104 h 105 s 109 p 111 h 112 e 114 k 116 h 117 t 120 h 121 @ 123 d 125 h the labels on the more abstract , phonemic level are assigned a different set of numerical identifiers .phoneme phoneme 1 d 10 @ 12 p 18 r 21 ai 23 s 25 r 32 ei 34 n 36 z 38 i 45 z 46 s 51 m 54 o : 56 l 59 @ 61 d 66 @ 68 n 70 e 78 n 81 i : 83 @ 89 v 91 @ 96 s 98 i 103 k 106 s 110 p 113 e 115 k 118 t 122 @ 124 d here is the remainder of the hierarchy .utterance utterance 8 intonational intonational 7 l intermediate intermediate 5 l- 42 l- 74 l- word word accent text 2 f w the 13 c s price 26 c s range 39 f w is 47 c s smaller 62 f w than 71 f s any 84 f w of 92 f w us 99 c s expected syllable syllable pitch_accent 4 w 15 s h * 28 s !h * 41 w 49 s h * 57 w 64 w 73 s 79 w h * 86 w 94 w 101 w 108 s h * 119 w a separate section of an emu annotation file lists each identifier , followed by all those identifiers which it dominates .for example , the line states that the first syllable ( id=4 ) directly or indirectly dominates phonetic segments ( id=0 ) and ( id=9 ) and phonemes ( id=1 ) and ( id=10 ) .the first intermediate phrase label ( id=5 ) dominates this material and much other material besides : 5 0 1 2 4 9 10 11 12 13 15 16 17 18 19 20 21 22 23 24 25 26 28 30 31 32 33 34 35 36 this exhaustive approach greatly facilitates the display of parts of the annotation hierarchy . if the syllable level is switched off , it is a trivial matter to draw lines directly from words to phonemes .the first three words of this annotation are displayed as an annotation graph in figure [ emu1 ] .here is used for phonetic segments , for phonemes and for strong ( ) and weak ( ) syllables . 
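The exhaustive dominance listing at the end of an Emu annotation makes it straightforward to recover times for higher-level units from the timed phonetic segments. The sketch below illustrates that upward propagation; the dominance sets for ids 4 and 5 copy the listing quoted above, the set for id 2 (the word "the") is inferred from the hierarchy rather than quoted, and the millisecond values are invented placeholders, since the xwaves label file itself is not reproduced in the text.

```python
# Sketch of upward time propagation in an Emu-style hierarchy: a higher-level
# unit's start/end is taken from the earliest/latest timed phonetic segment
# it dominates.

segment_times = {          # phonetic segment id -> (start_ms, end_ms), placeholder values
    0: (0.0, 55.0),        # d
    9: (55.0, 120.0),      # @
    11: (120.0, 185.0),    # p
    16: (185.0, 195.0),    # h
    # ... remaining segments omitted
}

dominates = {              # unit id -> ids it (directly or indirectly) dominates
    4: [0, 1, 9, 10],                                   # first syllable (as quoted above)
    2: [0, 1, 4, 9, 10],                                # word "the" (inferred, not quoted)
    5: [0, 1, 2, 4, 9, 10, 11, 12, 13, 15, 16],         # first intermediate phrase (truncated)
}

def unit_span(unit_id):
    """Start/end of a unit = extremes over the timed segments it dominates."""
    spans = [segment_times[i] for i in dominates[unit_id] if i in segment_times]
    if not spans:
        return None        # the unit dominates no timed material
    return min(s for s, _ in spans), max(e for _, e in spans)

for uid, name in [(4, "first syllable"), (2, "word 'the'"), (5, "intermediate phrase")]:
    print(name, unit_span(uid))
```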
the festival speech synthesis system is driven by richly-structured linguistic input. the festival data structure, called a `heterogeneous relation graph' (hrg), is a collection of binary relations over attribute-value matrices (avms). each matrix describes the local properties of some linguistic unit, such as a segment, a syllable, or a syntactic phrase. the value of an attribute could be atomic (such as a binary feature or a real number), or another (nested) avm, or a function. functions have the ability to traverse one or more binary relations and incorporate values from other avms. for example, if duration was an attribute of a syllable, its value would be a function subtracting the start time of the first dominated segment from the end time of the last dominated segment. typically, each level of structure includes these function-valued attributes so that temporal information is correctly propagated and does not need to be stored more than once. an example hrg is shown in figure [taylor]. each box contains an abbreviated form of an avm. the lines represent the binary relations. observe, for example, that the phonemes and the surface segments are organized into two sequences, the two parallel lines spanning the bottom of the figure. each sequence is a distinct binary relation. the hierarchical structures of the metrical and the syllable trees are two more binary relations. and the linear ordering of words is still another binary relation. figure [festival] gives the annotation graph representing the second half of the hrg structure. given the abundance of arcs and levels, we have expanded the vertical dimension of the nodes, but this is not significant. node identifiers and time references have been omitted. like the hrg, the annotation graph represents temporal information only once. yet unlike the hrg, there is no need to define explicit propagation functions. a diverse range of annotation models have now been considered. our provision of annotation graphs for each one already gives a foretaste of the formalism we present in [sec:algebra]. however, before launching into the formalism, we want to stand back from the details of the various models, and try to take in the big picture. in this section we describe a wide variety of architectural issues which we believe should be addressed by any general purpose model for annotating linguistic signals. in the discussion of childes and the lacito archiving project above, there were cases where our graph representation had nodes which bore no time reference. perhaps times were not measured, as in typical annotations of extended recordings where time references might only be given at major phrase boundaries (c.f. childes). or perhaps time measurements were not applicable in principle, as for phrasal translations (c.f. the lacito archiving project). various other possibilities suggest themselves. we might create a segment-level annotation automatically from a word-level annotation by looking up each word in a pronouncing dictionary and adding an arc for each segment, prior to hand-checking the segment annotations and adding time references to the newly created nodes. the annotation should remain well-formed (and therefore usable) at each step in this enrichment process.
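to make the enrichment scenario concrete, here is a hedged sketch of the dictionary-based expansion step. the representation of an annotation as a set of triples plus a partial time map anticipates the formalism of [sec:algebra] rather than any particular tool's api, and the pronouncing dictionary is a toy stand-in.

# A minimal sketch, assuming an annotation is a set of (node, (type, label), node)
# triples plus a partial node->time map.  Interior nodes created here carry no
# time reference until they are hand-checked later.

import itertools

_fresh = itertools.count(1000)   # generator of new, unanchored node ids

def add_segment_arcs(arcs, times, word_arc, pron_dict):
    """Add one segment arc per phone of a word; new interior nodes get no time."""
    start, (_, word), end = word_arc
    phones = pron_dict[word]
    nodes = [start] + [next(_fresh) for _ in phones[:-1]] + [end]
    for (n1, n2), phone in zip(zip(nodes, nodes[1:]), phones):
        arcs.add((n1, ('segment', phone), n2))
    return arcs, times

# Example: expand the word "she" using a toy pronouncing dictionary.
arcs = {(1, ('word', 'she'), 2)}
times = {1: 0.15, 2: 0.33}                     # only the word boundaries are anchored
add_segment_arcs(arcs, times, (1, ('word', 'she'), 2), {'she': ['sh', 'iy']})
# The graph stays well-formed: interior nodes simply lack time references for now.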
just as the temporal information may be partial ,so might the label information .for example , we might label indistinct speech with whatever information is available `so - and - so said something here that seems to be two syllables long and begins with a /t/ ' . beyond these two kinds of partiality ,there is an even more obvious kind of partiality we should recognize .an annotated corpus might be annotated in a fragmentary manner .it might be that only 1% of a certain recording has any bearing on the research question that motivated the collection and annotation work .therefore , it should be possible to have a well - formed annotation structure with arbitrary amounts of annotation detail at certain interesting loci , and limited or no detail elsewhere .this is a typical situation in phonetic or sociolinguistic research , where a large body of recordings may be annotated in detail with respect to a single , relatively infrequent phenomenon of interest .naturally , one could always extract a sub - corpus and annotate that material completely , thereby removing the need for partiality , but this may have undesirable consequences for managing a corpus : ( i ) special intervention is required each time one wants to expand the sub - corpus as the research progresses ; ( ii ) it is difficult to make annotations of a sub - corpus available to someone working on a related research question with an overlapping sub - corpus , and updates can not be propagated easily ; ( iii ) provenance issues arise , e.g. it may be difficult to identify the origin of any given fragment , in case access to broader context is necessary to retrieve the value of some other independent variable one might need to know ; and ( iv ) it is difficult to combine the various contributions into the larger task of annotating a standard corpus for use in perpetuity . by pointing out these problemswe do not mean to suggest that all annotations of a corpus should be physically or logically combined .on the contrary , even with one physical copy of a corpus , we would want to allow several independent ( partial ) annotations to coexist , where these may be owned by different people and stored remotely from each other . 
nor do we wish to suggest that the creation of sub-corpora is never warranted. the point is simply that an annotation formalism should not force users to create a derived corpus just so that a partial annotation is well-formed. existing annotated speech corpora always involve a hierarchy of several levels of annotation, even if they do not focus on very elaborate types of linguistic structure. timit has sentences, words and phonetic segments; a broadcast news corpus may have designated levels for shows, stories, speaker turns, sentences and words. some annotations may express much more elaborate hierarchies, with multiple hierarchies sometimes created for a single underlying body of speech data. for example, the switchboard corpus of conversational speech began with the three basic levels: conversation, speaker turn, and word. various parts of it have since been annotated for syntactic structure, for breath groups and disfluencies, for speech act type, and for phonetic segments. these various annotations have been done as separate efforts, and presented in formats that are fairly easy to process one-by-one, but difficult to compare or combine. considering the variety of approaches that have been adopted, it is possible to identify at least three general methods for encoding hierarchical information.

token-based hierarchy: here, hierarchical relations among annotations are explicitly marked with respect to particular tokens: `this particular segment is a daughter of this particular syllable.' systems that have adopted this approach include partitur, emu and festival.

type-based hierarchy: here, hierarchical information is given with respect to types, whether once and for all in the database, or ad hoc by a user, or both. in effect, this means that a grammar of some sort is specified, which induces (additional) structure in the annotation. this allows (for instance) the subordination of syllables to words to be indicated, but only as a general fact about all syllables and words, not as a specific fact about particular syllables and words. an sgml dtd is an example of this: it specifies a context-free grammar for any textual markup that uses it. in some cases, the hierarchical structure of a particular stretch of sgml markup cannot be determined without reference to the applicable dtd.

graph-based hierarchy: here, annotations are akin to the arcs in so-called `parse charts'. a parse chart is a particular kind of acyclic digraph, which starts with a string of words and then adds a set of arcs representing hypotheses about constituents dominating various substrings.
in such a graph , if the substring spanned by arc properly contains the substring spanned by arc , then the constituent corresponding to must dominate the constituent corresponding to ( though of course other structures may intervene ) .hierarchical relationships are encoded in a parse chart only to the extent that they are implied by this graph - wise inclusion thus two arcs spanning the same substring are unspecified as to their hierarchical relationship , and arcs ordered by temporal inclusion acquire a hierarchical relationship even when this is not appropriate given the types of those arcs ( though a grammar , external to the parse chart for a particular sentence , may settle the matter ; see also [ sec : hierarchy - local ] ) .+ as we have seen , many sorts of linguistic annotations are naturally encoded as graph structures with labeled arcs and time - marked nodes .such a representation arises naturally from the fact that elementary annotations are predicates about stretches of signal .thus in our timit example , we can construe the underlying sequence of audio samples as a sort of terminal string , with annotations representing hypotheses about constituents of various types that dominate designated subsequences .in the example cited , the word ` she ' spans the sequence from sample 2360 to sample 5200 ; the phoneme /sh/ spans the sequence from 2360 to 3720 ; and the phoneme /iy/ spans the sequence from 3720 to 5200 .this graph structure itself implies a sort of hierarchical structure based on temporal inclusion .if we interpret it as a parse chart , it tells us that the word ` she ' dominates the phoneme sequence /shiy/. examples of annotation systems that encode hierarchy using this approach are timit , childes and delta .( note that , once equipped with the full annotation graph formalism , we will be able to distinguish graph - based and time - based inclusion , conflated here . )a particular system may present some mixture of the above techniques .thus an sgml labeled bracketing may specify an unambiguous token - based hierarchy , with the applicable dtd grammar being just a redundant type - based check ; but in some cases , the dtd may be necessary to determine the structure of a particular stretch of markup .similarly , the graph structures implicit in timit s annotation files do not tell us , for the word spelled ` i ' and pronounced /ay/ , whether the word dominates the phoneme or vice versa ; but the structural relationship is implicit in the general relationship between the two types of annotations . 
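the following sketch illustrates the chart-like reading of the timit example just given: dominance is inferred purely from temporal inclusion of spans, and two arcs with identical spans (such as the word `i' and the phoneme /ay/) remain unordered. the tuple layout is hypothetical; timit itself stores only the sample offsets and labels.

# A hedged sketch of hierarchy-by-temporal-inclusion: an arc is taken to dominate
# any arc whose span it properly contains.

def dominates(a, b):
    """True if arc a's span properly contains arc b's span."""
    (a_start, a_end), (b_start, b_end) = a[:2], b[:2]
    return a_start <= b_start and b_end <= a_end and (a_start, a_end) != (b_start, b_end)

word = (2360, 5200, 'word', 'she')
sh   = (2360, 3720, 'phoneme', 'sh')
iy   = (3720, 5200, 'phoneme', 'iy')

assert dominates(word, sh) and dominates(word, iy)
# Two arcs with identical spans (e.g. the word 'i' and the phoneme /ay/) are not
# ordered by this test -- exactly the residual ambiguity noted in the text.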
an annotation framework ( or its implementation ) may also choose to incorporate arbitrary amounts of redundant encoding of structural information .it is often convenient to add redundant links explicitly from children to parents , from parents to children , from one child to the next in order , and so on so that a program can navigate the structure in a way that is clearer or more efficient .although such redundant links can be specified in the basic annotation itself as in _ festival _ they might equally well be added automatically , as part of a compilation or indexing process .in our view , the addition of this often - useful but predictable structure should not be an intrinsic part of the definition of general - purpose annotation structures .we want to distinguish the annotation formalism itself from various enriched data structures with redundant encoding of hierarchical structure , just as we would distinguish it from various indices for convenient searching of labels and label sequences . in considering how to encode hierarchical information ,we start from the premise that our representation will include some sort of graph structure , simply because this is the most fundamental and natural sort of linguistic annotation . given this approach , hierarchical structure can often be read off the annotation graph structure , as was suggested informally above and will be discussed more thoroughly in [ sec : algebra ] .for many applications , this will be enough . for the residual cases, we might add either type - based or token - based encoding of hierarchical information ( see [ sec : extensions ] ) .based on the formal precedent of sgml , the model of how chart - like data structures are actually used in parsing , and the practical precedents of databases like timit , it is tempting to consider adding a sort of grammar over arc labels as part of the formal definition of annotation graphs .however , in the absence of carefully - evaluated experience with circumstances in which this move is motivated , we prefer to leave this as something to be added by particular applications rather than incorporated into the formalism . in any case, we shall argue later ( see [ sec : multiple ] ) that we need a more general method to encode optional relationships among particular arcs .this method permits token - based marking of hierarchical structure as a special case .we also need to mention that particular applications in the areas of creation , query and display of annotations may be most naturally organized in ways that motivate a user interface based on a different sort of data structure than the one we are proposing .for instance , it may sometimes be easier to create annotations in terms of tree - like dominance relations rather than chart - like constituent extents , for instance in doing syntactic tree - banking .it may likewise be easier in some cases to define queries explicitly in terms of tree structures . and finally , it may sometimes be more helpful to display trees rather than equivalent annotation graphs the festival example in [ sec : festival ] was a case in point .we believe that such user interface issues will vary from application to application , and may even depend on the tastes of individuals in some cases . 
in any case , decisions about such user interface issues are separable from decisions about the appropriate choice of basic database structures .in addition to the hierarchical and sequential structuring of information about linguistic signals , we also have parallel structuring .nowhere is this clearer than in the gestural score notation used to describe the articulatory component of words and phrases ( e.g. ) . a gestural score maps out the time course of the gestural events created by the articulators of the vocal tract .this representation expresses the fact that the articulators move independently and that the segments we observe are the result of particular timing relationships between the gestures .figure [ tenpin3 ] gives the annotation graph for a gestural score .it shows the activity of the velum , the tongue tip and the lips .this example stands in stark contrast to the hierarchical structures discussed in the previous section .here there is no hierarchical relationship between the streams . ' '' ''another important difference between hierarchical and parallel structures needs to be drawn here .suppose that two labeled periods of an annotation begin ( or end ) at the same time .the alignment of two such boundaries might be necessary , or pure coincidence .as an example of necessary alignment , consider the case of phrase - initial words . here, the left boundary of a phrase lines up with the left boundary of its initial word . changingthe time of the phrase boundary should change the time of the word boundary , and vice versa . in the general case ,an update of this sort must propagate both upwards and downwards in the hierarchy .in fact , we argue that these two pieces of annotation actually _ share _ the same boundary : their arcs emanate from a single node . changing the time reference of that node does not need to propagate anywhere , since the information is already shared by the relevant arcs . as an example of coincidental alignment ,consider the case of gestural scores once more .in 100 annotated recordings of the same utterance we might find that the boundaries of different gestures occasionally coincide .an example of this appears in figure [ tenpin3 ] , where nodes 12 and 22 have the same time reference .however , this alignment is a contingent fact about a particular utterance token .an edit operation which changed the start time of one gesture would usually carry no implication for the start time of some other gesture .even though a linguistic event might have duration , such as the attainment of a pitch target , the most perspicuous annotation may be tied to an instant rather than an interval .some annotation formalisms ( e.g. emu , festival , partitur ) provide a way to label instants .the alignment of these instants with respect to other instants or intervals can then be investigated or exploited .there are at least five conceivable approaches to labeled instants ( note that this is not a mutually exclusive set ) : 1 .nodes could be optionally labeled ; or 2 .an instant can be modeled as a self - loop on a node , and again labeled just like any other arc ; or 3 .instants can be treated as arcs between two nodes with the same time reference ; or 4 .instants can be treated as short periods , where these are labeled arcs just like any other ; or 5 .certain types of labels on periods could be interpreted as referring to the commencement or the culmination of that period . 
with little evidence on which to base a decision between these options we opt for the most conservative , which is the one embodied in the last two options .thus with no extension to the ontology we already have two ways to model instants .as we have seen , annotations are often stratified , where each layer describes a different property of a signal .what are the possible temporal relationships between the pieces of a given layer ?some possibilities are diagrammed in figure [ layer ] , where a point is represented as a vertical bar , and an interval is represented as a horizontal line between two points . ' '' '' in the first row of figure [ layer ] , we see a layer which exhaustively partitions the time - flow into a sequence of non - overlapping intervals ( or perhaps intervals which overlap just at their endpoints ) . in the second rowwe see a layer of discrete instants .the next two rows illustrate the notions of gaps and overlaps .gaps might correspond to periods of silence , or to periods in between the salient events , or to periods which have yet to be annotated .overlaps occur between speaker turns in discourse ( see figure [ callhome ] ) or even between adjacent words in a single speech stream ( see figure [ links]a ) .the fifth row illustrates a hierarchical grouping of intervals within a layer ( c.f .the arcs in figure [ festival ] ) .the final row contains an arbitrary set of intervals and instants .we adopt this last option ( minus the instants ) as the most general case for the layer of an annotation .as we shall see , layers themselves will not be treated specially ; a layer can be thought of simply as the collection of arcs sharing the same type information .it is often the case that a given stretch of speech has multiple possible labels .for example , the region of speech corresponding to a monosyllabic word is both a syllable and a word , and in some cases it may also be a complete utterance .the combination of two independent annotations into a single annotation ( through set union ) may also result in two labels covering the same extent . in the general case, a label could be a ( typed ) attribute - value matrix , possibly incorporating nested structure , list- and set - valued attributes , and even disjunction . however , our hypothesis is that typed labels ( with atomic types and labels ) are sufficient .multiple labels spanning the same material reside on their own arcs .their endpoints can be varied independently ( see [ sec : scores ] ) , and the combining and projection of annotations does not require the merging and splitting of arcs .an apparent weakness of this conception is that we have no way of individuating arcs , and it is not possible for arcs to reference each other .however , there are cases when such links between arcs are necessary .three examples are displayed in figure [ links ] ; we discuss each in turn . ' '' '' recall from [ sec : scores ] that an annotation graph can contain several independent streams of information , where no nodes are shared between the streams .the temporal extents of the gestures in the different streams are almost entirely asynchronous ; any equivalences are likely to be coincidences . however , these gestures may still have determinate abstract connections to elements of a phonological analysis .thus a velar opening and closing gesture may be associated with a particular nasal feature , or with a set of nasal features , or with the sequence of changes from non - nasal to nasal and back again . 
but these associations cannot usually be established purely as a matter of temporal coincidence, since the phonological features involved are bundled together into other units (segments or syllables or whatever) containing other features that connect to other gestures whose temporal extents are all different. the rules of coordination for such gestures involve phase relations and physical spreading which are completely arbitrary from the perspective of the representational framework. a simplified example of the arbitrary relationship between the gestures comprising a word is illustrated in figure [links]a. we have the familiar annotation structure (taken from figure [tenpin3]), enriched with information about which words license which gestures. the words are shown as overlapping, although this is not crucially required. in the general case, the relationship between words and their gestures is not predictable from the temporal structure and the type structure alone. the example in figure [links]b shows a situation where we have multiple independent transcriptions of the same data. in this case, the purpose is to compare the performance of different transcribers on identical material. although the intervals do not line up exactly, an obvious correspondence exists between the labels and it should be possible to navigate between corresponding labels, even though their precise temporal relationship is somewhat arbitrary. observe that the cross references do not have equivalent status here; the relationship between and is not the same as that between and. the final example, figure [links]c, shows an annotation graph based on the hayu example from figure [archivage]. we would like to be able to navigate between words of a phrasal translation and the corresponding hayu words. this would be useful, for example, to study the various ways in which a particular hayu word is idiomatically translated. note that the temporal relationship between linked elements is much more chaotic here, and that there are examples of one-to-many and many-to-many mappings. the words being mapped do not even need to be contiguous subsequences. one obvious way to address these three examples is to permit arc labels to carry cross-references to other arc labels. the semantics of such cross-references might be left up to the individual case. this requires at least some arcs to be individuated (as all nodes are already). while it would be a simple matter to individuate arcs (c.f. [sec:extensions]), this step is not forced on us. there is another approach that stays more nearly within the confines of the existing formalism. in this approach, we treat all of the cases described above in terms of equivalence classes. one way to formalize a set of equivalence classes is as an ordered pair: class-type:identifier. but this is just our label notation all over again; the only news is that for label types interpreted as denoting equivalence classes, different labels with the same identifier are viewed as forming an equivalence class. another way to put this is that two (or more) labels are connected not by referencing one another, but by jointly referencing a particular equivalence class. in the general case, we have partially independent strands, where the material to be associated comes from some subset of the strands. within a given strand, zero, one or more arcs may participate in a given association, and the arcs are not necessarily contiguous.
for the gestural score in figure [ links]a we augment each arc with a second arc having the same span .these additional arcs all carry the type and the unique labels ( say ) and , depending on which word they belong to .the word arcs are also supplemented : with and with .see figure [ links2]a .now we can easily navigate around the set of gestures licensed by a word regardless of their temporal extent .we can use the type information on the existing labels in situations where we care about the directionality of the association . ' '' ''this approach can be applied to the other cases , with some further qualifications . for figure [ links]b ,there is more than one option , as shown in figure [ links2]b , b. in the first option , we have a single cross - reference , while in the second option , we have two cross - references .we could combine both of these into a single graph containing three cross - references .the translation case of figure [ links]c can be treated in the same way .if the phrasal translation of a word is a continuous stretch , it could be covered by multiple arcs ( one for each existing arc ) , or it could be covered by just a single arc . if the phrasal translation of a word is not a contiguous stretch , we may be forced to attach more than one diacritic arc with a given label .we do not anticipate any adverse consequences of such a move .incidentally , note that this linked multiple stream representation is employed in an actual machine translation system .observe that this construction involves assigning intervals ( node - pairs ) rather than arcs to equivalence classes . in cases where there are multiple independent cross references ,it is conceivable that we might have distinct equivalence classes involving different arcs which span the same two nodes .so long as these arcs are distinguished by their types we do not foresee a problem .this section has described three situations where potentially complex relationships between arc labels are required .however , we have demonstrated that the existing formalism is sufficiently expressive to encompass such relationships , and so we are able to preserve the simplicity of the model . despite this simplicity, there is one way in which the approach may seem profligate .there are no less than three ways for a pair of arcs to be ` associated ' : temporal overlap , hierarchy , and equivalence - class linkages .interestingly , this three - way possibility exactly mirrors the three ways that association is treated in the phonological literature .there , association is first and foremost a graphical notion . 
from context it is usually possible to tell whether the line drawn between two items indicates temporal overlap, a hierarchical relationship, or some more abstract, logical relationship. we have shown how all three uses are attested in the realm of linguistic annotation. the fact that the three conceptions of association are distinct and attested is sufficient cause for us to include all three in the formalism, notwithstanding the fact that we get them for free. an `annotated corpus' is a set of annotation graphs and an associated body of time series data. the time series might comprise one or more audio tracks, one or more video streams, one or more streams of physiological data of various types, and so forth. the data might be sampled at a fixed rate, or might consist of pairs of times and values, for irregularly spaced times. different streams will typically have quite different sampling rates. some streams might be defined only intermittently, as in the case of a continuous audio recording with intermittent physiological or imaging data. this is not an imagined list of conceptually possible types of data: we are familiar with corpora with all of the properties cited. the time series data will be packaged into a set of one or more files. depending on the application, these files may have some more or less complex internal structure, with headers or other associated information about type, layout and provenance of the data. these headers may correspond to some documented open standard, or they may be embedded in a proprietary system. the one thing that ties all of the time series data together is a shared time base. to use these arbitrarily diverse data streams, we need to be able to line them up time-wise. this shared time base is also the only pervasive and systematic connection such data is likely to have with annotations of the type we are discussing in this paper. it is not appropriate for an annotation framework to try to encompass the syntax and semantics of all existing time series file formats. they are simply too diverse and too far from being stable. however, we do need to be able to specify what time series data we are annotating, and how our annotations align with it, in a way that is clear and flexible. an ambitious approach would be to specify a new universal framework for the representation of time series data, with a coherent and well-defined semantics, and to insist that all annotated time series data should be translated into this framework. after all, we are doing the analogous thing for linguistic annotations: proposing a new, allegedly universal framework into which we argue that all annotations can be translated. such an effort for all time series data, whether or not it is a reasonable thing to do, is far outside the scope of what we are attempting here. a much simpler and less ambitious way to connect annotation graphs to their associated time series is to introduce arcs that reference particular time-series files, or temporally contiguous sub-parts of such files. each such arc specifies that the cited portion of data in the cited time-function file lines up with the portion of the annotation graph specified by the time-marks on its source and sink nodes. arbitrary additional information can be provided, such as an offset relative to the file's intrinsic time base (if any), or a specification selecting certain dimensions of vector-valued data.
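a sketch of what such a file-reference arc might look like follows. the label type `file' and its attribute names (path, offset, channel) are our own invented placeholders, since the formalism deliberately leaves the details of such references open.

# A minimal sketch of a file-reference arc.  The filename is hypothetical.

times = {101: 12.50, 102: 19.73}
file_arc = (101, ('file', {'path': 'sw02001.sph',
                           'offset': 0.0,        # shift relative to the file's own time base
                           'channel': 'A'}), 102)

def cited_interval(arc, times):
    """Return the stretch of signal, in file time, that this arc cites."""
    n1, (_, ref), n2 = arc
    return (times[n1] + ref['offset'], times[n2] + ref['offset'])

print(cited_interval(file_arc, times))   # (12.5, 19.73)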
taking this approach ,a single annotation could reference multiple files some parts of an annotation could refer specifically to a single file , while other parts of an annotation could be non - specific . in this way ,events that are specific to a channel ( like a particular speaker turn ) can be marked as such .equally , annotation content for an event which is not specific to a channel can be stored just once .these file - related labels , if properly designed and implemented , will permit an application to recover the time - series data that corresponds to a given piece of annotation at least to the extent that the annotation is time - marked and that any time - function files have been specified for the cited subgraph(s ) . thus if time - marking is provided at the speaker - turn level ( as is often the case for published conversational data ) , then a search for all the instances of a specified word string will enable us to recover usable references to all available time - series data for the turn that contains each of these word strings .the information will be provided in the form of file names , time references , and perhaps time offsets ; it will be the responsibility of the application ( or the user ) to resolve these references .if time - marking has been done at the word level , then the same query will enable us to recover a more exact set of temporal references in the same set of files .our preference for the moment is to allow the details of how to define these file - references to fall outside the formalism we are defining here .it should be clear that there are simple and natural ways to establish the sorts of linkages that are explicit in existing types of annotated linguistic database .after some practical experience , it may make sense to try to provide a more formal account of references to external time - series data .we would also like to point out a wider problem for which we do not have any general solution .although it is not our primary focus , we would like the annotation formalism to be extensible to spatially - specific annotations of video signals and similar data , perhaps by enriching the temporal anchors with spatial and/or image - plane information .anthropologists , conversation analysts , and sign - language researchers are already producing annotations that are ( at least conceptually ) anchored not only to time spans but also to a particular spatial or image - plane trajectory through the corresponding series of video frames . in the case of simple time - series annotations , we are tagging nodes with absolute time references , perhaps offset by a single constant for a given recorded signal . however , if we are annotating a video recording , the additional anchoring used for annotating video sequences will mostly not be about absolute space , even with some arbitrary shift of coordinate origin , but rather will be coordinates in the image plane . if there are multiple cameras , then image coordinates for each will differ , in a way that time marks for multiple simultaneous recordings do not .in fact , there are some roughly similar cases in audio annotation , where an annotation might reference some specific two- or three - dimensional feature of ( for instance ) a time - series of short - time amplitude spectra ( i.e. 
a spectrogram ) , in which case the quantitative details will depend on the analysis recipe .our system allows such references ( like any other information ) to be encoded in arc labels , but does not provide any more specific support . in this contextwe ought to raise the question of how annotation graphs relate to various multimedia standards like the synchronized multimedia integration language [ ] and mpeg-4 [ ] .since these provide ways to specify both temporal and spatial relationships among strings , audio clips , still pictures , video sequences , and so on , one hopes that they will offer support for linguistic annotation .it is hard to offer a confident evaluation , since mpeg-4 is still in development , and smil s future as a standard is unclear . with respect to mpeg-4 , we reserve judgment until its characteristics become clearer .our preliminary assessment is that smil is not useful for purposes of linguistic annotation , because it is mainly focused on presentational issues ( fonts , colors , screen locations , fades and animations , etc . ) and does not in fact offer any natural ways to encode the sorts of annotations that we surveyed in the previous section .thus it is easy to specify that a certain audio file is to be played while a certain caption fades in , moves across the screen , and fades out .it is not ( at least straightforwardly ) possible to specify that a certain audio file consists of a certain sequence of conversational turns , temporally aligned in a certain way , which consist in turn of certain sequences of words , etc .the tipster architecture for linguistic annotation of text is based on the concept of a fundamental , immutable textual foundation , with all annotations expressed in terms of byte offsets into this text .this is a reasonable solution for cases where the text is a published given , not subject to revision by annotators .however , it is not a good solution for speech transcriptions , which are typically volatile entities , constantly up for revision both by their original authors and by others . in the case of speech transcriptions, it is more appropriate to treat the basic orthographic transcription as just another annotation , no more formally privileged than a discourse analysis or a translation .then we are in a much better position to deal with the common practical situation , in which an initial orthographic transcription of speech recordings is repeatedly corrected by independent users , who may also go on to add new types of annotation of their own , and sometimes also adopt new formatting conventions to suit their own display needs .those who wish to reconcile these independent corrections , and also combine the independent additional annotations , face a daunting task . in this case , having annotations reference byte offsets into transcriptional texts is almost the worst imaginable solution .although nothing will make it trivial to untangle this situation , we believe our approach comes close .as we shall see in [ sec : files ] , our use of a flat , unordered file structure incorporating node identifiers and time references means that edits are as strictly local as they possibly can be , and connections among various types of annotation are as durable as they possibly can be .some changes are almost completely transparent ( e.g. 
changing the spelling of a name ) .many other changes will turn out not to interact at all with other types of annotation .when there is an interaction , it is usually the absolute minimum that is necessary .therefore , keeping track of what corresponds to what , across generations of distributed annotation and revision , is as simple as one can hope to make it .therefore we conclude that tipster - style byte offsets are an inappropriate choice for use as references to audio transcriptions , except for cases where such transcriptions are immutable in principle . in the other direction ,there are several ways to translate tipster - style annotations into our terms. the most direct way would be to treat tipster byte offsets exactly as analogous to time references since the only formal requirement on our time references is that they can be ordered .this method has the disadvantage that the underlying text could not be searched or displayed in the same way that a speech transcription normally could .a simple solution would be to add an arc for each of the lexical tokens in the original text , retaining the byte offsets on the corresponding nodes for translation back into tipster - architecture terms .timit and some other extant databases denominate signal time in sample numbers ( relative to a designated signal file , with a known sampling rate ) .other databases use floating - point numbers , representing time in seconds relative to some fixed offset , or other representations of time such as centiseconds or milliseconds . in our formalization of annotation graphs ,the only thing that really matters about time references is that they define an ordering .however , for comparability across signal types , time references need to be intertranslatable .we feel that time in seconds is generally preferable to sample or frame counts , simply because it is more general and easier to translate across signal representations. however , there may be circumstances in which exact identification of sample or frame numbers is crucial , and some users may prefer to specify these directly to avoid any possibility of confusion . technically , sampled data points ( such as audio samples or video frames ) may be said to denote time intervals rather than time points , and the translation between counts and times may therefore become ambiguous .for instance , suppose we have video data at 30 hz .should we take the 30th video frame ( counting from one ) to cover the time period from 29/30 to 1 second or from 29.5/30 to 30.5/30 second ? in either case , how should the endpoints of the interval be assigned ?different choices may shift the correspondence between times and frame numbers slightly .also , when we have signals at very different sampling rates , a single sampling interval in one signal can correspond to a long sequence of intervals in another signal .with video at 30 hz and audio at 44.1 khz , each video frame corresponds to 1,470 audio samples .suppose we have a time reference of .9833 seconds .a user might want to know whether this was created because some event was flagged in the 29th video frame , for which we take the mean time point to be 29.5/30 seconds , or because some event was flagged at the 43,365th audio sample , for which we take the central time point to be 43365.5/44100 seconds . for reasons like these , some users might want the freedom to specify references explicitly in terms of sample or frame numbers , rather than relying on an implicit method of translation to and from time in seconds . 
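the arithmetic in the example above can be made explicit with a small sketch; the zero-based counting and centre-of-interval convention used here is just one of the possible choices discussed, not something the formalism dictates.

# A sketch of the sample/frame <-> time translation discussed above, under one
# particular convention (zero-based counts, centre-of-interval times).

def frame_center(n, rate):
    """Centre time, in seconds, of zero-based frame/sample number n."""
    return (n + 0.5) / rate

video = frame_center(29, 30)        # 29.5/30  ~= 0.9833 s
audio = frame_center(43365, 44100)  # 43365.5/44100 ~= 0.9833 s
# Both events round to the same time reference, 0.9833 s, which is why some
# users may prefer to record the frame or sample number explicitly.
print(round(video, 4), round(audio, 4))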
several ways to accommodate this within our framework come to mind , but we prefer to leave this open , as we have no experience with applications in which this might be an issue . in our initial explorations , we are simply using time in seconds as the basis .looking at the practice of speech transcription and annotation across many existing ` communities of practice ' , we see commonality of abstract form along with diversity of concrete format .all annotations of recorded linguistic signals require one unavoidable basic action : to associate a label , or an ordered sequence of labels , with a stretch of time in the recording(s ) .such annotations also typically distinguish labels of different types , such as spoken words vs. non - speech noises .different types of annotation often span different - sized stretches of recorded time , without necessarily forming a strict hierarchy : thus a conversation contains ( perhaps overlapping ) conversational turns , turns contain ( perhaps interrupted ) words , and words contain ( perhaps shared ) phonetic segments .a minimal formalization of this basic set of practices is a directed graph with typed labels on the arcs and optional time references on the nodes .we believe that this minimal formalization in fact has sufficient expressive capacity to encode , in a reasonably intuitive way , all of the kinds of linguistic annotations in use today .we also believe that this minimal formalization has good properties with respect to creation , maintenance and searching of annotations . our strategy is to see how far this simple conception can go , resisting where possible the temptation to enrich its ontology of formal devices , or to establish label types with special syntax or semantics as part of the formalism .see section [ sec : extensions ] for a perspective on how to introduce formal and substantive extensions into practical applications .we maintain that most , if not all , existing annotation formats can naturally be treated , without loss of generality , as directed acyclic graphs having typed labels on ( some of ) the edges and time - marks on ( some of ) the vertices .we call these ` annotation graphs ' .it is important to recognize that translation into annotation graphs does not magically create compatibility among systems whose semantics are different . for instance, there are many different approaches to transcribing filled pauses in english each will translate easily into an annotation graph framework , but their semantic incompatibility is not thereby erased . 
it is not our intention here to specify annotations at the level of permissible tags, attributes, and values, as was done by many of the models surveyed in [sec:survey]. this is an application-specific issue which does not belong in the formalism. the need for this distinction can be brought into sharp focus by analogy with database systems. consider the relationship between the abstract notion of a relational algebra, the features of a relational database system, and the characteristics of a particular database. for example, the definition of substantive notions like `date' does not belong in the relational algebra, though there is good reason for a database system to have a special data type for dates. moreover, a particular database may incorporate all manner of restrictions on dates and relations among them. the formalization presented here is targeted at the most abstract level: we want to get the annotation formalism right. we assume that system implementations will add all kinds of special-case data types (i.e. types of labels with specialized syntax and semantics). we further assume that particular databases will want to introduce additional specifications. our current strategy, given the relative lack of experience of the field in dealing with such matters, is to start with a general model with very few special label types, and an open mechanism for allowing users to impose essentially arbitrary interpretations. this is how we deal with instants (c.f. [sec:instants]), associations between annotations and files (c.f. [sec:associations]) and coindexing of arcs (c.f. [sec:multiple]). let $T$ be a set of types, where each type $t \in T$ has a (possibly open) set of contentful elements. the label space $L$ is the union of all these sets. we write each label as a $\langle$type, content$\rangle$ pair, allowing the same contentful element to occur in different types. (so, for example, the phoneme /a/ and the phonetic segment [a] can be distinguished as $\langle$phoneme, a$\rangle$ vs. $\langle$phonetic, a$\rangle$.) annotation graphs are now defined as follows: an *annotation graph* $G$ over a label set $L$ and a node set $N$ is a set of triples having the form $\langle n_1, l, n_2 \rangle$, $l \in L$, $n_1, n_2 \in N$, which satisfies the following conditions: 1. $\langle N, \{\langle n_1, n_2 \rangle \mid \langle n_1, l, n_2 \rangle \in G\} \rangle$ is a directed acyclic graph. 2. $\tau : N \rightharpoonup \Re$ is an order-preserving map assigning times to (some of) the nodes. there is no requirement that annotation graphs be connected or rooted, or that they cover the whole time course of the linguistic signal they describe. the set of annotation graphs is closed under union, intersection and relative complement. for convenience, we shall refer to nodes which have a time reference (i.e. nodes $n$ for which $\tau(n)$ is defined) as _anchored nodes_. it will also be useful to talk about annotation graphs which are minimally anchored, in the sense defined below: an *anchored annotation graph* $G$ over a label set $L$ and a node set $N$ is an annotation graph satisfying two additional conditions: 1. if $n_1 \in N$ is such that $\langle n_1, l, n_2 \rangle \notin G$ for any $l \in L$, $n_2 \in N$, then $\tau(n_1)$ is defined; 2.
if $n_2 \in N$ is such that $\langle n_1, l, n_2 \rangle \notin G$ for any $l \in L$, $n_1 \in N$, then $\tau(n_2)$ is defined. anchored annotation graphs have no dangling arcs (or chains) leading to an indeterminate time point. it follows from this definition that, for any unanchored node, we can reach an anchored node by following a chain of arcs. in fact every path from an unanchored node will finally take us to an anchored node. likewise, an unanchored node can be reached from an anchored node. a key property of anchored annotation graphs is that we are guaranteed to have some information about the temporal locus of every node. this property will be made explicit in [sec:time-local]. an examination of the annotation graphs in [sec:survey] will reveal that they are all anchored annotation graphs. note that the set of anchored annotation graphs is closed under union, but not under intersection or relative complement. we can also define a _totally-anchored annotation graph_ as one in which $\tau$ is a total function. the annotation graphs in figures [timit], [partitur], [chat2] and [emu1] are all totally-anchored. equipped with this three-element hierarchy, we will insist that the annotation graphs that are the primary objects in linguistic databases are anchored annotation graphs. for the sake of a clean algebraic semantics for the query language, we will permit queries and the results of queries to be (sets of) arbitrary annotation graphs. the following definition lets us talk about two kinds of precedence relation on nodes in the graph structure. the first kind respects the graph structure (ignoring the time references), and is called structure precedence, or simply _s-precedence_. the second kind respects the temporal structure (ignoring the graph structure), and is called temporal precedence, or simply _t-precedence_. a node $n_1$ *s-precedes* a node $n_2$, written $n_1 <_s n_2$, if there is a chain of arcs from $n_1$ to $n_2$. a node $n_1$ *t-precedes* a node $n_2$, written $n_1 <_t n_2$, if $\tau(n_1) < \tau(n_2)$. observe that both these relations are transitive. there is a more general notion of precedence which mixes both relations. for example, we can infer that node $n_1$ precedes node $n_2$ if we can use a mixture of structural and temporal information to get from $n_1$ to $n_2$. this idea is formalized in the next definition. *precedence* is a binary relation on nodes, written $<$, which is the transitive closure of the union of the s-precedes and the t-precedes relations. armed with these definitions we can now define some useful inclusion relations on arcs. the first kind of inclusion respects the graph structure, so it is called _s-inclusion_. the second kind, _t-inclusion_, respects the temporal structure. an arc $p = \langle n_1, l, n_2 \rangle$ *s-includes* an arc $q = \langle n_3, l', n_4 \rangle$, written $p \sqsupseteq_s q$, if $n_1 \leq_s n_3$ and $n_4 \leq_s n_2$. $p$ *t-includes* $q$, written $p \sqsupseteq_t q$, if $n_1 \leq_t n_3$ and $n_4 \leq_t n_2$.
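to make these definitions concrete, here is a minimal sketch which represents an annotation graph as a set of triples together with a partial time map, and implements the anchoring condition and the two precedence relations. it is purely illustrative and not a reference implementation.

# A minimal sketch: arcs are (node, (type, label), node) triples, tau a dict.

def s_precedes(arcs, n1, n2):
    """n1 s-precedes n2 if there is a chain of arcs from n1 to n2."""
    frontier, seen = {n1}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        for (a, _, b) in arcs:
            if a == node and b == n2:
                return True
            if a == node and b not in seen:
                frontier.add(b)
    return False

def t_precedes(tau, n1, n2):
    """n1 t-precedes n2 if both are anchored and tau(n1) < tau(n2)."""
    return n1 in tau and n2 in tau and tau[n1] < tau[n2]

def is_anchored(arcs, tau):
    """Every node with no incoming arcs, and every node with no outgoing arcs, has a time."""
    nodes = {n for (a, _, b) in arcs for n in (a, b)}
    sources = {a for (a, _, b) in arcs}
    sinks = {b for (a, _, b) in arcs}
    return all(n in tau for n in nodes if n not in sinks or n not in sources)

arcs = {(1, ('word', 'she'), 3), (1, ('phoneme', 'sh'), 2), (2, ('phoneme', 'iy'), 3)}
tau = {1: 0.15, 3: 0.33}               # node 2 is unanchored
assert is_anchored(arcs, tau) and s_precedes(arcs, 1, 3) and t_precedes(tau, 1, 3)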
as with node precedence, we define a general notion of inclusion which generalizes over these two types: *inclusion* is a binary relation on arcs, written $\sqsupseteq$, which is the transitive closure of the union of the s-inclusion and the t-inclusion relations. note that all three inclusion relations are reflexive and transitive. we assume the existence of non-strict precedence and inclusion relations, defined in the obvious way. it is convenient to have a variety of ways of visualizing annotation graphs. most of the systems we surveyed in [sec:survey] come with visualization components, whether tree-based, extent-based, or some combination of these. we would endorse the use of any descriptively adequate visual notation in concert with the annotation graph formalism, so long as the notation can be endowed with an explicit formal semantics in terms of annotation graphs. note, however, that not all such visual notations can represent everything an annotation graph contains, so we still need one or more general-purpose visualizations for annotation graphs. the primary visualization chosen for annotation graphs in this paper uses networks of nodes and arcs to make the point that the mathematical objects we are dealing with are graphs. in most practical situations, this mode of visualization is cumbersome to the point of being useless. visualization techniques should be optimized for each type of data and for each application, but there are some general techniques that can be cited. observe that the direction of time-flow can be inferred from the left-to-right layout of annotation graphs, and so the arrow-heads are redundant. for simple connected sequences (e.g. of words) the linear structure of nodes and arcs is not especially informative; it is better to write the labels in sequence and omit the graph structure. the ubiquitous node identifiers should not be displayed unless there is accompanying text that refers to specific nodes. label types can be effectively distinguished with colors, typefaces or vertical position. we will usually need to break an annotation graph into chunks which can be presented line-by-line (much like interlinear text) in order to fit on a screen or a page. the applicability of these techniques depends on the fact that annotation graphs have a number of properties that do not follow automatically from a graphical notation. in other words, many directed acyclic graphs are not well-formed annotation graphs. two properties are of particular interest here. first, as noted in [sec:formalism], all the annotation graphs we have surveyed are actually anchored annotation graphs. this means that every arc lies on a path of arcs that is bounded at both ends by time references. so, even when most nodes lack a time reference, we can still associate such chains with an interval of time. a second property, more contingent but equally convenient, is that annotation graphs appear to be `rightward planar', i.e. they can be drawn in such a way that no arcs cross and each arc is monotonically increasing in the rightwards direction (c.f. the definition of upward planarity in ). these properties are put to good use in figure [visual], which employs a score notation (c.f. ).
the conventions employed by these diagrams are as follows. an arc is represented by a shaded rectangle, where the shading (or color, if available) represents the type information. where possible, arcs having the same type are displayed on the same level. arcs are labeled, but the type information is omitted. inter-arc linkages (see [sec:multiple]) are represented using coindexing. the ends of arcs are represented using short vertical lines having the same width as the rectangles. these may be omitted if the tokenization of a string is predictable. if two arcs are incident on the same node but their corresponding rectangles appear on different levels of the diagram, then the relevant endpoints are connected by a solid line. for ease of external reference, these lines can be decorated with a node identifier. anchored nodes are connected to the timeline with dotted lines. the point of intersection is labeled with a time reference. if necessary, multiple timelines may be used. nodes sharing a time reference are connected with a dotted line. in order to fit on a page, these diagrams may be cut at any point, with any partial rectangles labeled on both parts. unlike some other conceivable visualizations (such as the tree diagrams and autosegmental diagrams used by festival and emu), this scheme emphasizes the fact that each component of an annotation has temporal extent. the scheme neatly handles the cases where temporal information is partial. as stated at the outset, we believe that the standardization of file formats is a secondary issue. the identification of a common conceptual framework underlying all work in this area is an earlier milestone along any path to standardization of formats and tools. that said, we believe that file formats should be transparent encodings of the annotation structure. the flattest data structure we can imagine for an annotation graph is a set of 3-tuples, one per arc, consisting of a node identifier, a label, and another node identifier. this data structure has a transparent relationship to our definition of annotation graphs, and we shall refer to it as the `basic encoding'. node identifiers are supplemented with time values, where available, and are wrapped with angle brackets. a file encoding for the utf example (figure [utf]) is given below.

< 21/3291.29 > speaker/gloria-allred < 25/2439.82 >
< 13/2391.11 > w/country < 14/2391.60 >
< 11/2348.81 > spkrtype/male < 14/2391.60 >
< 21/3291.29 > spkrtype/female < 25/2439.82 >
< 22/ > w/i < 23/2391.60 >
< 23/2391.60 > w/think < 24/ >
< 11/2348.81 > speaker/roger-hedgecock < 14/2391.60 >
< 12/ > w/this < 13/2391.11 >
< 21/3291.29 > w/well < 22/ >

we make no ordering requirement, thus any reordering of these lines is taken to be equivalent. equally, any subset of the tuples comprising an annotation graph (perhaps determined by matching a `grep'-like pattern) is a well-formed annotation graph. accordingly, a basic query operation on an annotation graph can be viewed as asking for subgraphs that satisfy some predicate, and each such subgraph will itself be an annotation graph. any union of the tuples comprising annotation graphs is a well-formed annotation graph, and this can be implemented by simple concatenation of the tuples (ignoring any repeats). this format obviously encodes redundant information, in that nodes and their time references may be mentioned more than once. however, we believe this is a small price to pay for having a maximally
simple file structure .let us consider the implications of various kinds of annotation updates for the file encoding .the addition of new nodes and arcs simply involves concatenation to the basic encoding ( recall that the basic encoding is an unordered list of arcs ) .the same goes for the addition of new arcs between existing nodes . for the user adding new annotation data to an existing read - only corpus a widespread mode of operation the new data can reside in one or more separate files , to be concatenated at load time .the insertion and modification of labels for existing arcs involves changing one line of the basic encoding .adding , changing or deleting a time reference may involve non - local change to the basic encoding of an annotation .this can be done in either of two ways : a linear scan through the basic encoding , searching for all instances of the node identifier ; or indexing into the basic encoding using the time - local index to find the relevant lines .of course , the time reference could be localized in the basic encoding by having a separate node set , referenced by the arc set .this would permit the time reference of a node to be stored just once .however , we prefer to keep the basic encoding as simple as possible .maintaining consistency of the temporal and hierarchical structure of an annotation under updates requires further consideration . in the worst case, an entire annotation structure would have to be validated after each update .to the extent that information can be localized , it is to be expected that incremental validation will be possible .this might apply after each and every update , or after a collection of updates in case there is a sequence of elementary updates which unavoidably takes us to an invalid structure along the way to a final , valid structure .our approach to the file encoding has some interesting implications in the area of so - called ` standoff markup ' . under our proposed scheme , a readonly file containing a reference annotationcan be concatenated with a file containing additional annotation material . in order for the new material to be linked to the existing material, it simply has to reuse the same node identifiers and/or have nodes anchored to the same time base .annotation deltas can employ a ` diff ' method operating at the level of individual arcs .since the file contains one line per arc and since arcs are unordered , no context needs to be specified other than the line which is being replaced or modified .a consequence of our approach is that all speech annotation ( in the broad sense ) can be construed as ` standoff ' description .corpora of annotated texts and recorded signals may range in size from a few thousand words up into the billions .the data may be in the form of a monolithic file , or it may be cut up into word - size pieces , or anything in between. the annotation might be dense as in phonetic markup or sparse as in discourse markup , and the information may be uniformly or sporadically distributed through the data . at present , the annotational components of most speech databases are still relatively small objects .only the largest annotations would cover a whole hour of speech ( or 12,000 words at 200 words per minute ) , and even then , a dense annotation of this much material would only occupy a few hundred kilobytes . 
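since the basic encoding is just an unordered list of arcs, loading and combining annotations amounts to little more than line parsing and set union. the sketch below makes the assumed line syntax explicit; a real reader would need more careful tokenization of label content.

# A hedged sketch of reading the 'basic encoding': one arc per line, each node
# written as <id/time> (the time may be absent), and the label as type/content.

import re

ARC = re.compile(r'<\s*(\d+)/?\s*([\d.]*)\s*>\s*(\S+)\s*/\s*(.+?)\s*<\s*(\d+)/?\s*([\d.]*)\s*>')

def read_arcs(lines):
    arcs, times = set(), {}
    for line in lines:
        m = ARC.match(line.strip())
        if not m:
            continue
        n1, t1, typ, content, n2, t2 = m.groups()
        arcs.add((int(n1), (typ, content.strip()), int(n2)))
        if t1:
            times[int(n1)] = float(t1)
        if t2:
            times[int(n2)] = float(t2)
    return arcs, times

a1, t1 = read_arcs(['< 21/3291.29 > speaker/gloria-allred < 25/2439.82 >'])
a2, t2 = read_arcs(['< 22/ > w/i < 23/2391.60 >'])
merged = (a1 | a2, {**t1, **t2})   # union of annotations is just set union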
in most cases, serial search of such annotations will suffice. ultimately, however, it will be necessary to devise indexing schemes; these will necessarily be application-specific, depending on the nature of the corpus and of the queries to be expressed. the indexing method is not a property of the query language but a way to make certain kinds of query run efficiently. for large corpora, certain kinds of query might be essentially useless without such indexing. at the level of individual arc labels, we envision three simple indexes, corresponding to the three obvious dimensions of an annotation graph: a time-local index, a type-local index and a hierarchy-local index. these are discussed below. more sophisticated indexing schemes could surely be devised, for instance to support proximity search on labels. we also assume the existence of an index for node identifiers; a simple approach would be to sort the lines of the annotation file with respect to an ordering on the node identifiers. note that, since we wish to index linguistic databases, and not queries or query results, the indexes will assume that annotation graphs are anchored.

we index the annotation graph in terms of the intervals it employs. let $t_1 < t_2 < \cdots < t_n$ be the sequence of time references used by the annotation. we form the intervals $[t_1, t_2], [t_2, t_3], \ldots, [t_{n-1}, t_n]$. next, we assign each arc to a contiguous set of these intervals. suppose that an arc is incident on nodes which are anchored to time points $t_j$ and $t_k$, where $j < k$. then we assign the arc to the following set of intervals: $\{ [t_i, t_{i+1}] \mid j \le i < k \}$. now we generalize this construction to work when a time reference is missing from either or both of the nodes. first we define the _ greatest lower bound (glb) _ and the _ least upper bound (lub) _ of an arc. let $a$ be an arc.

* glb($a$) is the greatest time reference $t$ such that there is some anchored node $n$ with $\tau(n) = t$ from which $a$ is reachable.

* lub($a$) is the least time reference $t$ such that there is some anchored node $n$ with $\tau(n) = t$ which is reachable from $a$.
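as a computational gloss on these definitions, the following sketch (again python, with illustrative names) computes glb, lub and the interval assignment over arcs given as (source, type, label, target) node-id tuples, plus a partial map from node ids to time references; it assumes the graph is anchored, as the indexes do.

    from bisect import bisect_left
    from collections import defaultdict

    # a two-arc fragment of the utf example, purely to show the calling convention
    ARCS = [
        ("11", "speaker", "roger-hedgecock", "14"),
        ("13", "w", "country", "14"),
    ]
    TIMES = {"11": 2348.81, "13": 2391.11, "14": 2391.60}

    def reachable(arcs):
        # for each node, the set of nodes reachable by following arcs forward
        # (every node is taken to reach itself)
        succ = defaultdict(set)
        nodes = set()
        for src, _, _, dst in arcs:
            succ[src].add(dst)
            nodes.update((src, dst))
        reach = {}
        for start in nodes:
            seen, stack = {start}, [start]
            while stack:
                for nxt in succ[stack.pop()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            reach[start] = seen
        return reach

    def glb_lub(arc, reach, times):
        src, _, _, dst = arc
        # glb: greatest time of an anchored node from which the arc is reachable
        glb = max(times[n] for n in reach if src in reach[n] and n in times)
        # lub: least time of an anchored node reachable from the arc
        lub = min(times[n] for n in reach[dst] if n in times)
        return glb, lub

    def time_local_index(arcs, times):
        refs = sorted(set(times.values()))    # t1 < t2 < ... < tn
        reach = reachable(arcs)
        index = defaultdict(list)             # interval (t_i, t_{i+1}) -> arcs
        for arc in arcs:
            glb, lub = glb_lub(arc, reach, times)
            for i in range(bisect_left(refs, glb), bisect_left(refs, lub)):
                index[(refs[i], refs[i + 1])].append(arc)
        return dict(index)

on the two-arc fragment, the speaker arc is assigned to both intervals and the word arc only to the second, which is the behaviour the index definition calls for.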
according to these definitions, the _ glb _ of an arc is the time mark of the ` greatest ' anchored node from which the arc is reachable. similarly, the _ lub _ of an arc is the time mark of the ` least ' anchored node reachable from that arc. if an arc is anchored at both ends, then its glb and lub are simply the time references of its two nodes. the _ glb _ and _ lub _ are guaranteed to exist for anchored annotation graphs (but not for annotation graphs in general). they are guaranteed to be unique since the ordering on time references is total. we can take the _ potential _ temporal span of an arc $a$ to be the interval $[\mathrm{glb}(a), \mathrm{lub}(a)]$. we then assign the arc to a set of intervals as before. below we give an example time-local index for the utf annotation from figure [ utf ].

    2348.81 2391.11
      <12/> w/this <13/2391.11>
      <11/2348.81> speaker/roger-hedgecock <14/2391.60>
      <11/2348.81> spkrtype/male <14/2391.60>

    2391.11 2391.29
      <13/2391.11> w/country <14/2391.60>
      <11/2348.81> speaker/roger-hedgecock <14/2391.60>
      <11/2348.81> spkrtype/male <14/2391.60>

    2391.29 2391.60
      <13/2391.11> w/country <14/2391.60>
      <22/> w/i <23/2391.60>
      <21/2391.29> w/well <22/>
      <21/2391.29> speaker/gloria-allred <25/2439.82>
      <11/2348.81> speaker/roger-hedgecock <14/2391.60>
      <21/2391.29> spkrtype/female <25/2439.82>
      <11/2348.81> spkrtype/male <14/2391.60>

    2391.60 2439.82
      <21/2391.29> speaker/gloria-allred <25/2439.82>
      <21/2391.29> spkrtype/female <25/2439.82>
      <23/2391.60> w/think <24/>

the index is built on a sequence of four temporal intervals which are derived from the five time references used in figure [ utf ]. observe that the right-hand side of the index is made up of fully-fledged arcs (sorted lexicographically), rather than references to arcs. using the longer, fully-fledged arcs has two benefits. first, it localizes the arc information on disk for fast access. second, the right-hand side is a well-formed annotation graph which can be directly processed by the same tools used by the rest of any implementation, or used as a citation. this time-local index can be used for computing general overlap and inclusion relations. to find all arcs overlapping a given arc, we iterate through the list of time-intervals comprising its potential span and collect up the arcs found in the time-local index for each such interval. additional checks can be performed to see if a candidate arc is ` s-overlapped ' or ` t-overlapped '. this process, or parts of it, could be done offline. to find all arcs included in a given arc, we can find the overlapping arcs and perform the obvious tests for s-inclusion or t-inclusion. again, this process could be done offline. an interesting property of the time-local index is that it is well-behaved when time information is partial.

continuing in the same vein as the time-local index, we propose a set of self-indexing structures for the types, one for each type. the arcs of an annotation graph are then partitioned into types. the index for each type is a list of arcs, sorted as follows: 1.
of two arcs, the one bearing the lexicographically earlier label appears first; 2. if two arcs share the same label, the one having the least _ glb _ appears first; 3. if two arcs share the same label and have the same _ glb _, then the one with the larger _ lub _ appears first.

    w
      country          <13/2391.11> w/country <14/2391.60>
      i                <22/> w/i <23/2391.60>
      think            <23/2391.60> w/think <24/>
      this             <12/> w/this <13/2391.11>
      well             <21/2391.29> w/well <22/>
    speaker
      gloria-allred    <21/2391.29> speaker/gloria-allred <25/2439.82>
      roger-hedgecock  <11/2348.81> speaker/roger-hedgecock <14/2391.60>
    spkrtype
      female           <21/2391.29> spkrtype/female <25/2439.82>
      male             <11/2348.81> spkrtype/male <14/2391.60>

annotations also need to be indexed with respect to their implicit hierarchical structure (c.f. [ sec : hierarchy ]). recall that we have two kinds of inclusion relation, s-inclusion (respecting graph structure) and t-inclusion (respecting temporal structure). we refine these relations to be sensitive to an ordering on our set of types. this ordering has been left external to the formalism, since it does not fit easily into the flat structure described in [ sec : files ]. we assume the existence of a function type($\cdot$) returning the type of an arc. an arc $p$ * s-dominates * an arc $q$ if $p$ s-includes $q$ and type($p$) is greater than type($q$) in the type ordering. an arc $p$ * t-dominates * an arc $q$ if $p$ t-includes $q$ and type($p$) is greater than type($q$). again, we can define a dominance relation which is neutral between these two: an arc $p$ * dominates * an arc $q$ if $p$ includes $q$ and type($p$) is greater than type($q$). in our current conception, s-dominance will be the most useful. (the three kinds of dominance were included for generality and consistency with the preceding discussion.)

we now illustrate an index for s-dominance. suppose the ordering on types places w below spkrtype and spkrtype below speaker. we could index the utf example as follows, ordering the arcs using the method described in [ type - local ], and using indentation to distinguish the dominating arcs from the dominated arcs.

    <11/2348.81> speaker/roger-hedgecock <14/2391.60>
      <11/2348.81> spkrtype/male <14/2391.60>
        <12/> w/this <13/2391.11>
        <13/2391.11> w/country <14/2391.60>
    <21/2391.29> speaker/gloria-allred <25/2439.82>
      <21/2391.29> spkrtype/female <25/2439.82>
        <21/2391.29> w/well <22/>
        <22/> w/i <23/2391.60>
        <23/2391.60> w/think <24/>

this concludes the discussion of proposed indexes. we have been deliberately schematic, aiming to demonstrate a range of possibilities which can be refined and extended later. note that the various indexing schemes described above work for a single annotation only. we would need to enrich the node-id and time reference information in order for this to work for whole databases of annotations (see [ sec : extensions ]). it could then be generalized further, permitting search across multiple databases, e.g. to find all instances of a particular word in both the switchboard and callhome english databases (c.f. [ sec : callhome ]). many details about indexes could be application-specific. under the approach described here, we can have several copies of an annotation, where each is self-indexing in a way that localizes different kinds of information. a different approach would be to provide three categories of iterators, each of which takes an arc and returns the ` next ' arc with respect to the temporal, sortal and hierarchical structure of an annotation.
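to round off the indexing discussion, here is a sketch of the s-dominance test and a flat hierarchy-local index over the same tuple representation as the earlier sketches. since the precise definition of s-inclusion is given in an earlier section, the reading used here (the dominated arc lies on a path between the endpoints of the dominating arc) and the chain ordering on types are assumptions for illustration only.

    from collections import defaultdict

    # an assumed chain ordering on types, matching the example above:
    # w below spkrtype, spkrtype below speaker
    TYPE_RANK = {"w": 0, "spkrtype": 1, "speaker": 2}

    def reachable(arcs):
        # forward reachability over (source, type, label, target) tuples;
        # every node is taken to reach itself
        succ = defaultdict(set)
        nodes = set()
        for src, _, _, dst in arcs:
            succ[src].add(dst)
            nodes.update((src, dst))
        reach = {}
        for start in nodes:
            seen, stack = {start}, [start]
            while stack:
                for nxt in succ[stack.pop()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            reach[start] = seen
        return reach

    def s_includes(p, q, reach):
        # assumed reading of graph-structural inclusion: q's source is
        # reachable from p's source, and p's target is reachable from q's target
        return q[0] in reach[p[0]] and p[3] in reach[q[3]]

    def s_dominates(p, q, reach, rank=TYPE_RANK):
        # assumes every arc type appears in the ordering
        return rank[p[1]] > rank[q[1]] and s_includes(p, q, reach)

    def hierarchy_local_index(arcs, rank=TYPE_RANK):
        # group every dominating arc with the arcs it s-dominates
        reach = reachable(arcs)
        index = {}
        for p in arcs:
            dominated = [q for q in arcs if q != p and s_dominates(p, q, reach, rank)]
            if dominated:
                index[p] = dominated
        return index

the reachability computation could obviously be shared with the time-local sketch rather than repeated; we keep each fragment self-contained only for readability here.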
it would be the task of any implementation to make sure that the basic encoding is consistent with itself , and that the conglomerate structure ( basic encoding plus indexes ) is consistent .more broadly , the design of an application - specific indexing scheme will have to consider what kinds of sequences or connections among tokens are indexed . in general , the indexing method should be based on the same elementary structures from which queries are constructed .indices will specify where particular elementary annotation graphs are to be found , so a complex search expression can be limited to those regions for which these graphs are necessary parts .there are many existing approaches to linguistic annotation , and many options for future approaches .any evaluation of proposed frameworks , including ours , depends on the costs and benefits incurred in a range of expected applications .our explorations have presupposed a particular set of ideas about applications , and therefore a particular set of goals .we think that these ideas are widely shared , but it seems useful to make them explicit . herewe are using ` framework ' as a neutral term to encompass both the definition of the logical structure of annotations , as discussed in this paper , as well as various further specifications of e.g. annotation conventions and file formats .generality , specificity , simplicity : : + annotations should be publishable ( and will often be published ) , and thus should be mutually intelligible across laboratories , disciplines , computer systems , and the passage of time . + therefore , an annotation framework should be sufficiently expressive to encompass all commonly used kinds of linguistic annotation , including sensible variants and extensions. it should be capable of managing a variety of ( partial ) information about labels , timing , and hierarchy .+ the framework should also be formally well - defined , and as simple as possible , so that researchers can easily build special - purpose tools for unforeseen applications as well as current ones , using future technology as well as current technology .searchability and browsability : : + automatic extraction of information from large annotation databases , both for scientific research and for technological development , is a key application . + therefore , annotations should be conveniently and efficiently searchable , regardless of their size and content. 
it should be possible to search across annotations of different material produced by different groups at different times if the content permits it without having to write special programs .partial annotations should be searchable in the same way as complete ones .+ this implies that there should be an efficient algebraic query formalism , whereby complex queries can be composed out of well - defined combinations of simple ones , and that the result of querying a set of annotations should be just another set of annotations .+ this also implies that ( for simple queries ) there should be efficient indexing schemes , providing near constant - time access into arbitrarily large annotation databases .+ the framework should also support easy ` projection ' of natural sub - parts or dimensions of annotations , both for searching and for display purposes .thus a user might want to browse a complex multidimensional annotation database or the results of a preliminary search on one as if it contained only an orthographic transcription .maintainability and durability : : + large - scale annotations are both expensive to produce and valuable to retain .however , there are always errors to be fixed , and the annotation process is in principle open - ended , as new properties can be annotated , or old ones re - done according to new principles .experience suggests that maintenance of linguistic annotations , especially across distributed edits and additions , can be a vexing and expensive task .therefore , any framework should facilitate maintenance of coherence in the face of distributed development and correction of annotations . + different dimensions of annotation should therefore be orthogonal , in the sense that changes in one dimension ( e.g. phonetic transcription ) do not entail any change in others ( e.g. discourse transcription ) , except insofar as the content necessarily overlaps .annotations of temporally separated material should likewise be modular , so that revisions to one section of an annotation do not entail global modification .queries not affected by corrections or additions should return the same thing before and after an update .+ in order to facilitate use in scientific discourse , it should be possible to define durable references which remain valid wherever possible , and produce the same results unless the referenced material itself has changed .+ note that it is easy enough to define an invertible sequence of editing operations for any way of representing linguistic annotations e.g. by means of unix ` diff ' but what we need in this case is also a way to specify the correspondence ( wherever it remains defined ) between arbitrary bits of annotation before and after the edit .furthermore , we do not want to impose any additional burden on human editors ideally , the work minimally needed to implement a change should also provide any bookkeeping needed to maintain correspondences .how well does our proposal satisfy these criteria ?we have tried to demonstrate generality , and to provide an adequate formal foundation , which is also ontologically parsimonious ( if not positively miserly ! 
) .although we have not defined a query system , we have indicated the basis on which one can be constructed : ( tuple sets constituting ) annotation graphs are closed under union , intersection and relative complementation ; the set of subgraphs of an annotation graph is simply the power set of its constituent tuples ; simple pattern matching on an annotation graph can be defined to produce a set of annotation subgraphs ; etc .obvious sorts of simple predicates on temporal relations , graphical relations , label types , and label contents will clearly fit into this framework .the foundation for maintainability is present : fully orthogonal annotations ( those involving different label types and time points ) do not interact at all , while linked annotations ( such as those that share time points ) are linked only to the point that their content requires .new layers of annotation can be added monotonically , without any modification whatsoever in the representation of existing layers .corrections to existing annotations are as representationally local as they can be , given their content .although we have not provided a recipe for durable citations ( or for maintenance of trees of invertible modifications ) , the properties just cited will make it easier to develop practical approaches .in particular , the relationship between any two stages in the development or correction of an annotation will always be easy to compute as a set of basic operations on the tuples that express an annotation graph .this makes it easy to calculate just the aspects of a tree or graph of modifications that are relevant to resolving a particular citation .linguistic databases typically include important bodies of information whose structure has nothing to do with the passage of time in any particular recording , nor with the sequence of characters in any particular text .for instance , the switchboard corpus includes tables of information about callers ( including date of birth , dialect area , educational level , and sex ) , conversations ( including the speakers involved , the date , and the assigned topic ) , and so on . 
this side information is usually well expressed as a set of relational tables .there also may be bodies of relevant information concerning a language as a whole rather than any particular speech or text database : lexicons and grammars of various sorts are the most obvious examples .the relevant aspects of these kinds of information also often find natural expression in relational terms .users will commonly want to frame queries that combine information of these kinds with predicates defined on annotation graphs : ` find me all the phrases flagged as questions produced by south midland speakers under the age of 30 ' .the simplest way to permit this is simply to identify ( some of the ) items in a relational database with ( some of the ) labels in an annotation .this provides a limited , but useful , method for using the results of certain relational queries in posing an annotational query , or vice versa .more complex modes of interaction are also possible , as are connections to other sorts of databases ; we regard this as a fruitful area for further research .we have focused on the case of audio or video recordings , where a time base is available as a natural way to anchor annotations .this role of time can obviously be reassigned to any other well - ordered single dimension .the most obvious case is that of character- or byte - offsets into an invariant text file .this is the principle used in the so - called tipster architecture , where all annotations are associated with stretches of an underlying text , identified via byte offsets into a fixed file .we do not think that this method is normally appropriate for use with audio transcriptions , because they are so often subject to revision .as far as the annotation graph formalism is concerned , node identifiers , arc types , and arc labels are just sets . as a practical matter , members of each setwould obviously be individuated as strings .this opens the door to applications which encode arbitrary information in these strings .indeed , the notion that arc labels encode ` external ' information is fundamental to the enterprise .the whole point of the annotations is to include strings interpreted as orthographic words , speaker names , phonetic segments , file references , or whatever .these interpretations are not built into the formalism , however , and this is an equally important trait , since it determines the simplicity and generality of the framework . in the current formalization, arcs are decorated with pairs consisting of a type and a label .this structure already contains a certain amount of complexity , since the simplest kind of arc decoration would be purely atomic . in this case, we are convinced that the added value provided by label types is well worth the cost : all the bodies of annotation practice that we surveyed had some structure that was naturally expressed in terms of atomic label types , and therefore a framework in which arc decorations were just single uninterpreted strings zeroth order labels would not be expressively adequate . 
a first - order approach is to allow arcs to carry multiple attributes and values what amounts to a fielded record .the current formalization can be seen as providing records with just two fields .it is easy to imagine a wealth of other possible fields .such fields could identify the original annotator and the creation date of the arc .they could represent the confidence level of some other field .they could encode a complete history of successive modifications .they could provide hyperlinks to supporting material ( e.g. chapter and verse in the annotators manual for a difficult decision ) .they could provide equivalence class identifiers ( as a first - class part of the formalism rather than by the external convention as in [ sec : multiple ] ) . and they could include an arbitrarily - long sgml - structured commentary . in principle , we could go still further , and decorate arcs with arbitrarily nested attribute - value matrices ( avms ) endowed with a type system a second - order approach . these avms could contain references to other parts of the annotation , and multiple avms could contain shared substructures .substructures could be disjoined to represent the existence of more than one choice , and where separate choices are correlated the disjunctions could be coindexed ( i.e. parallel disjunction ) .appropriate attributes could depend on the local type information .a dtd - like label grammar could specify available label types , their attributes and the type ordering discussed in [ sec : hierarchy - local ] .we believe that this is a bad idea : it negates the effort that we made to provide a simple formalism expressing the essential contents of linguistic annotations in a natural and consistent way .typed feature structures are also very general and powerful devices , and entail corresponding costs in algorithmic and implementational complexity . therefore , we wind up with a less useful representation that is much harder to compute with .consider some of the effort that we have put into establishing a simple and consistent ontology for annotation . in the childes case ( [ sec : childes ] ) , we split a sentence - level annotation into a string of word - level annotations for the sake of simplifying word - level searches . in the festival case ( [ sec : festival ] ) we modeled hierarchical information using the syntactic chart construction .because of these choices , childes and festival annotations become formally commensurate they can be searched or displayed in exactly the same terms . with labels as typed feature structures , whole sentences , whole tree structures , andindeed whole databases could be packed into single labels .we could therefore have chosen to translate childes and festival formats directly into typed feature structures .if we had done this , however , the relationship between simple concepts shared by the two formats such as lexical tokens and time references would remain opaque . 
for these reasons, we would like to remain cautious about adding to the ontology of our formalism .however , several simple extensions seem well worth considering .perhaps the simplest one is to add a single additional field to arc decorations , called the ` comment ' , which would be formally uninterpreted , but could be used in arbitrary ( and perhaps temporary ) ways by implementations .it could be used to add commentary , or to encode the authorship of the label , or indicate who has permission to edit it , or in whatever other way .another possibility would be to add a field for encoding equivalence classes of arcs directly , rather than by the indirect means specified earlier .our preference is to extend the formalism cautiously , where it seems that many applications will want a particular capability , and to offer a simple mechanism to permit local or experimental extensions , while advising that it be used sparingly .finally , we note in passing that the same freedom for enriching arc labels applies to node identifiers .we have not given any examples in which node identifiers are anything other than digit strings .however , as with labels , in the general case a node identifier could encode an arbitrarily complex data structure .for instance , it could be used to encode the source of a time reference , or to give a variant reference ( such as a video frame number , c.f . [ sec : offsets ] ) , or to specify whether a time reference is missing because it is simply not known or it is inappropriate ( c.f . [ sec : childes ] , [ sec : lacito ] ) .unlike the situation with arc labels , this step is always harmless ( except that implementations that do not understand it will be left in the dark ) . only string identity matters to the formalism , and node identifiersdo not ( in our work so far ) have any standard interpretation outside the formalism .we have claimed that annotation graphs can provide an interlingua for varied annotation databases , a formal foundation for queries on such databases , and a route to easier development and maintenance of such databases .delivering on these promises will require software .since we have made only some preliminary explorations so far , it would be best to remain silent on the question until we have some experience to report .however , for those readers who agree with us that this is an essential point , we will sketch our current perspective . as our catalogue of examples indicated , it is fairly easy to translate between other speech database formats and annotation graphs , and we have already built translators in several cases .we are also experimenting with simple software for creation , visualization , editing , validation , indexing , and search .our first goal is an open collection of relatively simple tools that are easy to prototype and to modify , in preference to a monolithic ` annotation graph environment . 
'however , we are also committed to the idea that tools for creating and using linguistic annotations should be widely accessible to computationally unsophisticated users , which implies that eventually such tools need to be encapsulated in reliable and simple interactive form .other researchers have also begun to experiment with the annotation graph concept as a basis for their software tools , and a key index of the idea s success will of course be the extent to which tools are provided by others .existing open - source software such as transcriber , snack , and the isip transcriber tool [ ] , whose user interfaces are all implemented in tcl / tk , make it easy to create interactive tools for creation , visualization , and editing of annotation graphs . for instance , transcriber can be used without any changes to produce transcriptions in the ldc broadcast news format , which can then be translated into annotation graphs .provision of simple input / output functions enables the program to read and write annotation graphs directly .the architecture of the current tool is not capable of dealing with arbitrary annotation graphs , but generalizations in that direction are planned .an annotation may need to be submitted to a variety of validation checks , for basic syntax , content and larger - scale structure .first , we need to be able to tokenize and parse an annotation , without having to write new tokenizers and parsers for each new task .we also need to undertake some superficial syntax checking , to make sure that brackets and quotes balance , and so on . in the sgml realm , this need is partially met by dtds .we propose to meet the same need by developing conversion and creation tools that read and write well - formed graphs , and by input / output modules that can be used in the further forms of validation cited below .second , various content checks need to be performed .for instance , are purported phonetic segment labels actually members of a designated class of phonetic symbols or strings ?are things marked as ` non - lexemic vocalizations ' drawn from the officially approved list ?do regular words appear in the spell - check dictionary ?do capital letters occur in legal positions ?these checks are not difficult to implement , e.g. as perl scripts , especially given a module for handling basic operations correctly .finally , we need to check for correctness of hierarchies of arcs .are phonetic segments all inside words , which are all inside phrases , which are all inside conversational turns , which are all inside conversations ?again , it is easy to define such checks in a software environment that has appropriately expressive primitives ( e.g. a perl annotation graph module ) .indexing of the types discussed earlier ( [ sec : indexing ] ) , is well defined , algorithmically simple , and easy to implement in a general way .construction of general query systems , however , is a matter that needs to be explored more fully in order to decide on the details of the query primitives and the methods for building complex queries , and also to try out different ways to express queries . among the many questions to be exploredare : 1 .how to express general graph- and time - relations ; 2 . how to integrate regular expression matching over labels ; 3 . how to integrate annotation - graph queries and relational queries ; 4 . 
how to integrate lexicons and other external resources ; 5 .how to model sets of databases , each of which contains sets of annotation graphs , signals and perhaps relational side - information .it is easy to come up with answers to each of these questions , and it is also easy to try the answers out , for instance in the context of a collection of perl modules providing the needed primitive operations . we regard it as an open research problem to find good answers that interact well , and also to find good ways to express queries in the system that those answers will define . whether or not our ideas are accepted by the various research communities who create and use linguistic annotations , we hope to foster discussion and cooperation among members of these communities .a focal point of this effort is the linguistic annotation page at [ ] .when we look at the numerous and diverse forms of linguistic annotation documented on that page , we see underlying similarities that have led us to imagine general methods for access and search , and shared tools for creation and maintenance .we hope that this discussion will move others in the same direction .an earlier version of this paper was presented at icslp-98 .we are grateful to the following people for discussions which have helped clarify our ideas about annotations , and for comments on earlier drafts : peter buneman , steve cassidy , chris cieri , hamish cunningham , david graff , ewan klein , brian macwhinney , boyd michailovsky , florian schiel , richard sproat , paul taylor , henry thompson , peter wittenburg , jonathan wright , and participants of the cocosda workshop at icslp-98 .t. altosaar , m. karjalainen , m. vainio , and e. meister . and estonian speech applications developed on an object - oriented speech processing and database system . in _ proceedings of the first international conference on language resources and evaluation workshop : speech database development for central and eastern european languages _ , 1998 .granada , spain , may 1998 .claude barras , edouard geoffrois , zhibiao wu , and mark liberman .transcriber : a free tool for segmenting , labelling and transcribing speech . in _ proceedings of the first international conference on language resources and evaluation _ , 1998 .steven bird . a lexical database tool for quantitative phonological research . in _ proceedings of the third meeting of the acl special interest group in computational phonology_. association for computational linguistics , 1997 .steve cassidy and jonathan harrington .emu : an enhanced hierarchical speech data management system . in_ proceedings of the sixth australian international conference on speech science and technology _ , 1996 . .konrad ehlich . a transcription system for discourse data . in janea. edwards and martin d. lampert , editors , _ talking data : transcription and coding in discourse research _ , pages 12348 .hillsdale , nj : erlbaum , 1992 .j. j. godfrey , e. c. holliman , and j. mcdaniel .switchboard : a telephone speech corpus for research and develpment . in _ proceedings of the ieee conference on acoustics , speech and signal processing _, volume i , pages 51720 , 1992 .susan r. hertz .the delta programming language : an integrated approach to nonlinear phonology , phonetics , and speech synthesis . in johnkingston and mary e. 
beckman, editors, _ papers in laboratory phonology i: between the grammar and physics of speech _, chapter 13, pages 215-57. cambridge university press, 1990. daniel jurafsky, rebecca bates, noah coccaro, rachel martin, marie meteer, klaus ries, elizabeth shriberg, andreas stolcke, paul taylor, and carol van ess-dykema. automatic detection of discourse structure for speech recognition and understanding. in _ proceedings of the 1997 ieee workshop on speech recognition and understanding _, pages 88-95, santa barbara, 1997. daniel jurafsky, elizabeth shriberg, and debra biasca. switchboard swbd-damsl labeling project coder's manual, draft 13. technical report 97-02, university of colorado institute of cognitive science, 1997. ron sacks-davis, tuong dao, james a. thom, and justin zobel. indexing documents for queries on structure, content and attributes. in _ international symposium on digital media information base _, pages 236-45, 1997.
` linguistic annotation ' covers any descriptive or analytic notations applied to raw language data . the basic data may be in the form of time functions audio , video and/or physiological recordings or it may be textual . the added notations may include transcriptions of all sorts ( from phonetic features to discourse structures ) , part - of - speech and sense tagging , syntactic analysis , ` named entity ' identification , co - reference annotation , and so on . while there are several ongoing efforts to provide formats and tools for such annotations and to publish annotated linguistic databases , the lack of widely accepted standards is becoming a critical problem . proposed standards , to the extent they exist , have focussed on file formats . this paper focuses instead on the logical structure of linguistic annotations . we survey a wide variety of existing annotation formats and demonstrate a common conceptual core , the _ annotation graph_. this provides a formal framework for constructing , maintaining and searching linguistic annotations , while remaining consistent with many alternative data structures and file formats .